perm filename MSG2.MSG[JNK,JMC]1 blob
sn#738373 filedate 1984-01-17 generic text, type C, neo UTF8
COMMENT ⊗ VALID 00305 PAGES
C REC PAGE DESCRIPTION
C00001 00001
C00040 00002 ∂11-Nov-83 0858 BMACKEN@SRI-AI.ARPA Visit of CSLI Advisory Panel
C00043 00003 ∂11-Nov-83 0940 JF@SU-SCORE.ARPA fifth speaker for the 21st
C00045 00004 ∂11-Nov-83 1401 LENAT@SU-SCORE.ARPA FORUM SCHEDULING (important)
C00050 00005 ∂11-Nov-83 1432 PETERS@SRI-AI.ARPA House sitter wanted
C00051 00006 ∂11-Nov-83 1437 LENAT@SU-SCORE.ARPA Invitation to my Colloq 11/15
C00052 00007 ∂11-Nov-83 2131 BRODER@SU-SCORE.ARPA Abstract for Kirkpatrick's talk.
C00055 00008 ∂13-Nov-83 0227 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #51
C00073 00009 ∂13-Nov-83 1536 @SRI-AI.ARPA:vardi%SU-HNV.ARPA@SU-SCORE.ARPA Knowledge Seminar
C00075 00010 ∂13-Nov-83 1708 ALMOG@SRI-AI.ARPA reminder on why context wont go away
C00077 00011 ∂14-Nov-83 0222 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #52
C00091 00012 ∂14-Nov-83 1152 ELYSE@SU-SCORE.ARPA Overseas Studies Centers
C00093 00013 ∂14-Nov-83 1324 ELYSE@SU-SCORE.ARPA Annual Faculty Reports
C00094 00014 ∂14-Nov-83 1446 MWALKER@SU-SCORE.ARPA Professor Random
C00095 00015 ∂14-Nov-83 1702 LAWS@SRI-AI.ARPA AIList Digest V1 #97
C00120 00016 ∂14-Nov-83 1831 LAWS@SRI-AI.ARPA AIList Digest V1 #96
C00137 00017 ∂14-Nov-83 2241 GOLUB@SU-SCORE.ARPA Congratulations!
C00138 00018 ∂14-Nov-83 2244 GOLUB@SU-SCORE.ARPA meeting
C00139 00019 ∂15-Nov-83 1453 @SRI-AI.ARPA:BrianSmith.PA@PARC-MAXC.ARPA Lisp As Language Course Change of Plans
C00144 00020 ∂15-Nov-83 1506 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Nov. 17th
C00148 00021 ∂15-Nov-83 1516 @SRI-AI.ARPA:BrianSmith.pa@PARC-MAXC.ARPA Lisp As Language Course, P.S.
C00151 00022 ∂15-Nov-83 1531 KJB@SRI-AI.ARPA Advisory Panel
C00153 00023 ∂15-Nov-83 1641 KJB@SRI-AI.ARPA p.s. on Advisory Panel
C00154 00024 ∂15-Nov-83 1717 @SRI-AI.ARPA:Nuyens.pa@PARC-MAXC.ARPA Re: Lisp As Language Course Change of Plans
C00156 00025 ∂15-Nov-83 1838 LAWS@SRI-AI.ARPA AIList Digest V1 #98
C00180 00026 ∂15-Nov-83 2041 @SRI-AI.ARPA:sag%Psych.#Pup@SU-SCORE.ARPA Thursday Indian Dinner
C00183 00027 ∂15-Nov-83 2114 @SRI-AI.ARPA:vardi@diablo Knowledge Seminar
C00185 00028 ∂15-Nov-83 2200 Winograd.PA@PARC-MAXC.ARPA AI and the military
C00202 00029 ∂15-Nov-83 2319 PKARP@SU-SCORE.ARPA Fall Potluck
C00204 00030 ∂16-Nov-83 1032 @MIT-MC:RICKL%MIT-OZ@MIT-MC limitations of logic
C00207 00031 ∂16-Nov-83 1256 @SRI-AI.ARPA:withgott.pa@PARC-MAXC.ARPA Re: Transportation for Fodor and Partee
C00208 00032 ∂16-Nov-83 1351 @MIT-MC:Batali@MIT-OZ limitations of logic
C00211 00033 ∂16-Nov-83 1454 KJB@SRI-AI.ARPA Friday afternoon
C00213 00034 ∂16-Nov-83 1522 @MIT-MC:DAM%MIT-OZ@MIT-MC limitations of logic
C00217 00035 ∂16-Nov-83 1638 @MIT-MC:Hewitt%MIT-OZ@MIT-MC limitations of logic
C00222 00036 ∂16-Nov-83 1654 @MIT-MC:Hewitt%MIT-OZ@MIT-MC limitations of logic
C00227 00037 ∂16-Nov-83 1734 PATASHNIK@SU-SCORE.ARPA towards a more perfect department
C00229 00038 ∂16-Nov-83 1906 LAWS@SRI-AI.ARPA AIList Digest V1 #99
C00259 00039 ∂16-Nov-83 2059 JF@SU-SCORE.ARPA schedule
C00261 00040 ∂16-Nov-83 2106 DKANERVA@SRI-AI.ARPA Newsletter No. 9, November 17, 1983
C00286 00041 ∂16-Nov-83 2133 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
C00292 00042 ∂16-Nov-83 2147 GOLUB@SU-SCORE.ARPA Search for Chairman
C00293 00043 ∂16-Nov-83 2224 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
C00297 00044 ∂16-Nov-83 2256 @MIT-MC:KDF%MIT-OZ@MIT-MC Re: limitations of logic
C00301 00045 ∂17-Nov-83 0058 @MIT-MC:JMC@SU-AI limitations of logic
C00310 00046 ∂17-Nov-83 0908 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
C00313 00047 ∂17-Nov-83 0918 @MIT-MC:Hewitt%MIT-OZ@MIT-MC limitations of logic
C00319 00048 ∂17-Nov-83 0920 DKANERVA@SRI-AI.ARPA On-line copy of CSLI Newsletter
C00321 00049 ∂17-Nov-83 0934 @MIT-MC:DAM%MIT-OZ@MIT-MC TMSing
C00326 00050 ∂17-Nov-83 0948 @MIT-MC:Hewitt%MIT-OZ@MIT-MC limitations of logic
C00331 00051 ∂17-Nov-83 0959 @MIT-MC:DAM%MIT-OZ@MIT-MC The meaning of Theories
C00333 00052 ∂17-Nov-83 1011 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
C00337 00053 ∂17-Nov-83 1047 TAJNAI@SU-SCORE.ARPA IBM Wine and Cheese Party for Everyone
C00338 00054 ∂17-Nov-83 1127 @MIT-MC:Tong.PA@PARC-MAXC Re: limitations of logic
C00342 00055 ∂17-Nov-83 1144 @MIT-MC:JERRYB%MIT-OZ@MIT-MC [KDF at MIT-AI: limitations of logic]
C00346 00056 ∂17-Nov-83 1421 JF@SU-SCORE.ARPA finding the room
C00348 00057 ∂17-Nov-83 1652 @MIT-MC:KDF%MIT-OZ@MIT-MC Re: limitations of logic
C00353 00058 ∂17-Nov-83 1654 @MIT-MC:KDF%MIT-OZ@MIT-MC What to do until clarification comes
C00356 00059 ∂17-Nov-83 2112 @MIT-MC:HEWITT@MIT-XX I think the new mail system ate the first try
C00361 00060 ∂17-Nov-83 2135 @MIT-MC:HEWITT@MIT-XX The meaning of Theories
C00364 00061 ∂18-Nov-83 0927 PATASHNIK@SU-SCORE.ARPA student bureaucrat electronic address
C00366 00062 ∂18-Nov-83 0936 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
C00369 00063 ∂18-Nov-83 1006 @MIT-MC:KDF%MIT-OZ@MIT-MC Re: limitations of logic
C00373 00064 ∂18-Nov-83 1025 BMACKEN@SRI-AI.ARPA Meetings with the Advisory Panel
C00375 00065 ∂18-Nov-83 1025 @MIT-MC:DIETTERICH@SUMEX-AIM Re: limitations of logic
C00380 00066 ∂18-Nov-83 1033 JF@SU-SCORE.ARPA finding the room for BATS
C00382 00067 ∂18-Nov-83 1056 @MIT-MC:DAM%MIT-OZ@MIT-MC limitations of logic
C00385 00068 ∂18-Nov-83 1110 @MIT-MC:DAM%MIT-OZ@MIT-MC The meaning of Theories
C00387 00069 ∂18-Nov-83 1139 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
C00388 00070 ∂18-Nov-83 1147 @MIT-MC:Agha%MIT-OZ@MIT-MC First-Order logic and Human Knowledge
C00393 00071 ∂18-Nov-83 1209 @SRI-AI.ARPA:CLT@SU-AI SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
C00395 00072 ∂19-Nov-83 0106 NET-ORIGIN@MIT-MC Re: limitations of logic
C00397 00073 ∂19-Nov-83 1533 ARK@SU-SCORE.ARPA reminder
C00398 00074 ∂19-Nov-83 1537 ARK@SU-SCORE.ARPA reminder
C00399 00075 ∂19-Nov-83 2258 @MIT-MC:Laws@SRI-AI Overlap with AIList
C00404 00076 ∂20-Nov-83 1008 PETERS@SRI-AI.ARPA Building Planning
C00407 00077 ∂20-Nov-83 1722 LAWS@SRI-AI.ARPA AIList Digest V1 #100
C00427 00078 ∂20-Nov-83 2100 LAWS@SRI-AI.ARPA AIList Digest V1 #101
C00452 00079 ∂21-Nov-83 0222 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #53
C00469 00080 ∂21-Nov-83 1021 @MIT-MC:Laws@SRI-AI AIList
C00471 00081 ∂21-Nov-83 1025 @MIT-MC:marcus@AEROSPACE Distribution list
C00472 00082 ∂21-Nov-83 1119 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: AIList
C00475 00083 ∂21-Nov-83 1154 KJB@SRI-AI.ARPA Fujimura'a visit
C00476 00084 ∂21-Nov-83 1311 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: AIList
C00478 00085 ∂21-Nov-83 1311 ALMOG@SRI-AI.ARPA reminder on why context wont go away
C00481 00086 ∂21-Nov-83 1315 STOLFI@SU-SCORE.ARPA Re: towards a more perfect department
C00483 00087 ∂21-Nov-83 1521 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
C00496 00088 ∂21-Nov-83 1639 @MIT-MC:perlis%umcp-cs@CSNET-CIC Logic in a teacup
C00505 00089 ∂21-Nov-83 1856 GOLUB@SU-SCORE.ARPA CSD Chairperson Extraordinaire Required
C00507 00090 ∂21-Nov-83 2002 GOLUB@SU-SCORE.ARPA [Robert L. White <WHITE@SU-SIERRA.ARPA>: Space]
C00509 00091 ∂21-Nov-83 2016 GOLUB@SU-SCORE.ARPA lunch
C00510 00092 ∂21-Nov-83 2017 GOLUB@SU-SCORE.ARPA Disclosure form
C00511 00093 ∂21-Nov-83 2212 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
C00519 00094 ∂22-Nov-83 0156 NET-ORIGIN@MIT-MC Policy for Redistribution, Reproduction, and Republication of Messages
C00521 00095 ∂22-Nov-83 0742 PATASHNIK@SU-SCORE.ARPA informal departmental lunch
C00523 00096 ∂22-Nov-83 1009 GOLUB@SU-SCORE.ARPA [Jeffrey D. Ullman <ULLMAN@SU-SCORE.ARPA>: Re: lunch]
C00525 00097 ∂22-Nov-83 1013 @MIT-MC:GAVAN%MIT-OZ@MIT-MC limitations of logic
C00531 00098 ∂22-Nov-83 1335 @MIT-MC:BERWICK%MIT-OZ@MIT-MC limitations of logic
C00534 00099 ∂22-Nov-83 1541 @MIT-MC:MONTALVO%MIT-OZ@MIT-MC reasoning about inconsistency
C00538 00100 ∂22-Nov-83 1724 LAWS@SRI-AI.ARPA AIList Digest V1 #102
C00568 00101 ∂22-Nov-83 2118 @MIT-MC:KDF%MIT-OZ@MIT-MC limitations of logic
C00570 00102 ∂23-Nov-83 0229 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #54
C00597 00103 ∂23-Nov-83 0553 @MIT-MC:HEWITT@MIT-XX limitations of logic
C00600 00104 ∂23-Nov-83 0604 @MIT-MC:HEWITT@MIT-XX limitations of logic
C00602 00105 ∂23-Nov-83 0958 KJB@SRI-AI.ARPA Advisory Panel's Visit
C00608 00106 ∂23-Nov-83 0959 KJB@SRI-AI.ARPA Fujimura's visit
C00609 00107 ∂23-Nov-83 1005 @MIT-MC:mclean@NRL-CSS perlis on tarski and meaning
C00614 00108 ∂23-Nov-83 1006 @MIT-MC:DAM%MIT-OZ@MIT-MC limitations of logic
C00617 00109 ∂23-Nov-83 1008 KJB@SRI-AI.ARPA "Joan's committee"
C00619 00110 ∂23-Nov-83 1052 @MIT-MC:JCMA%MIT-OZ@MIT-MC perlis on tarski and meaning
C00622 00111 ∂23-Nov-83 1600 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
C00629 00112 ∂23-Nov-83 1720 KJB@SRI-AI.ARPA
C00630 00113 ∂23-Nov-83 1729 DKANERVA@SRI-AI.ARPA Newsletter No. 10, November 24, 1983
C00643 00114 ∂23-Nov-83 2032 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
C00649 00115 ∂24-Nov-83 1748 @MIT-MC:perlis%umcp-cs@CSNET-CIC Tarski and meaning
C00655 00116 ∂24-Nov-83 1809 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: reasoning about inconsistency
C00660 00117 ∂24-Nov-83 2246 @MIT-MC:GAVAN%MIT-OZ@MIT-MC limitations of logic
C00670 00118 ∂25-Nov-83 0220 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #55
C00682 00119 ∂25-Nov-83 1520 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
C00692 00120 ∂25-Nov-83 1603 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
C00696 00121 ∂25-Nov-83 1731 @SRI-AI.ARPA:GOGUEN@SRI-CSL rewrite rule seminar
C00699 00122 ∂26-Nov-83 0339 @MIT-MC:GAVAN%MIT-OZ@MIT-MC limitations of logic
C00714 00123 ∂26-Nov-83 1114 GOLUB@SU-SCORE.ARPA Faculty lunch
C00715 00124 ∂26-Nov-83 1311 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
C00720 00125 ∂26-Nov-83 1351 @MIT-MC:Batali%MIT-OZ@MIT-MC Consistency and the Real World
C00726 00126 ∂26-Nov-83 1537 @MIT-MC:DAM%MIT-OZ@MIT-MC Edited Mailing List
C00730 00127 ∂26-Nov-83 1820 @MIT-MC:JMC@SU-AI
C00733 00128 ∂27-Nov-83 0427 @MIT-MC:GAVAN%MIT-OZ@MIT-MC limitations of logic
C00736 00129 ∂27-Nov-83 1032 KJB@SRI-AI.ARPA [Y. Moschovakis <oac5!ynm@UCLA-CS>: Abstract of talk]
C00739 00130 ∂27-Nov-83 2131 LAWS@SRI-AI.ARPA AIList Digest V1 #103
C00763 00131 ∂28-Nov-83 0709 @MIT-MC:mclean@NRL-CSS tarski and meaning, again
C00767 00132 ∂28-Nov-83 0730 @MIT-MC:mclean@NRL-CSS tarski on meaning, again
C00771 00133 ∂28-Nov-83 0741 @MIT-MC:DAM%MIT-OZ@MIT-MC Model Theoretic Ontologies
C00774 00134 ∂28-Nov-83 0919 KJB@SRI-AI.ARPA Press Release
C00775 00135 ∂28-Nov-83 0930 @MIT-MC:Batali%MIT-OZ@MIT-MC Model Theoretic Ontologies
C00780 00136 ∂28-Nov-83 1001 @SRI-AI.ARPA:donahue.pa@PARC-MAXC.ARPA 1:30 Tues. Nov. 29: Computing Seminar: Luca Cardelli (Bell
C00784 00137 ∂28-Nov-83 1051 @SRI-AI.ARPA:GOGUEN@SRI-CSL this week's rewrite seminar
C00787 00138 ∂28-Nov-83 1145 @MIT-MC:crummer@AEROSPACE Autopoiesis and Self-Referential Systems
C00789 00139 ∂28-Nov-83 1307 ALMOG@SRI-AI.ARPA Reminder on why context wont go away
C00791 00140 ∂28-Nov-83 1322 @SRI-AI.ARPA:TW@SU-AI Abstract for Talkware seminar Wed - Amy Lansky
C00795 00141 ∂28-Nov-83 1351 @MIT-MC:Tong.PA@PARC-MAXC Re: Autopoiesis and Self-Referential Systems
C00798 00142 ∂28-Nov-83 1357 LAWS@SRI-AI.ARPA AIList Digest V1 #104
C00821 00143 ∂28-Nov-83 1356 ELYSE@SU-SCORE.ARPA Faculty Meeting
C00822 00144 ∂28-Nov-83 1405 @MIT-MC:GAVAN%MIT-OZ@MIT-MC Autopoiesis and Self-Referential Systems
C00827 00145 ∂28-Nov-83 1441 TAJNAI@SU-SCORE.ARPA LOTS OF FOOD at IBM Reception
C00829 00146 ∂28-Nov-83 1450 SCHMIDT@SUMEX-AIM.ARPA Symbolics Christmas gathering
C00831 00147 ∂28-Nov-83 1511 RPERRAULT@SRI-AI.ARPA meeting this week
C00832 00148 ∂28-Nov-83 1605 ELYSE@SU-SCORE.ARPA Faculty Meeting\
C00833 00149 ∂28-Nov-83 1801 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
C00839 00150 ∂29-Nov-83 0155 LAWS@SRI-AI.ARPA AIList Digest V1 #105
C00862 00151 ∂29-Nov-83 0830 EMMA@SRI-AI.ARPA recycling bin
C00864 00152 ∂29-Nov-83 1122 GOLUB@SU-SCORE.ARPA lunch
C00865 00153 ∂29-Nov-83 1128 GOLUB@SU-SCORE.ARPA IBM message
C00870 00154 ∂29-Nov-83 1251 @MIT-MC:DAM%MIT-OZ@MIT-MC Model Theoretic Ontologies
C00873 00155 ∂29-Nov-83 1315 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Dec. 1
C00876 00156 ∂29-Nov-83 1344 GROSZ@SRI-AI.ARPA important meeting Thursday at 1
C00878 00157 ∂29-Nov-83 1434 RIGGS@SRI-AI.ARPA Dec. 1 A and B Project Meeting Time
C00879 00158 ∂29-Nov-83 1603 ELYSE@SU-SCORE.ARPA Reminder
C00880 00159 ∂29-Nov-83 1837 LAWS@SRI-AI.ARPA AIList Digest V1 #106
C00912 00160 ∂29-Nov-83 2220 GOLUB@SU-SCORE.ARPA IBM meeting
C00917 00161 ∂30-Nov-83 0817 KJB@SRI-AI.ARPA Burstall's visit
C00918 00162 ∂30-Nov-83 0941 @SRI-AI.ARPA:BrianSmith.pa@PARC-MAXC.ARPA Area C Meeting with Rod Burstall
C00920 00163 ∂30-Nov-83 1030 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA Next week's colloquium
C00921 00164 ∂30-Nov-83 1123 TAJNAI@SU-SCORE.ARPA Call for Bell Fellowship Nominations
C00923 00165 ∂30-Nov-83 1131 GROSZ@SRI-AI.ARPA A&B meeting postponed
C00924 00166 ∂30-Nov-83 1435 PATASHNIK@SU-SCORE.ARPA phone number for prospective applicants
C00926 00167 ∂30-Nov-83 1647 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
C00929 00168 ∂30-Nov-83 1657 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: tarski on meaning, again
C00932 00169 ∂30-Nov-83 1706 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: Model Theoretic Ontologies
C00937 00170 ∂30-Nov-83 1726 KJB@SRI-AI.ARPA Tomorrow a.m
C00938 00171 ∂30-Nov-83 2011 @MIT-ML:crummer@AEROSPACE Model Theoretic Ontologies
C00942 00172 ∂30-Nov-83 2316 JRP@SRI-AI.ARPA Outsiders and Insiders
C00953 00173 ∂01-Dec-83 0224 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #56
C00967 00174 ∂01-Dec-83 0846 EMMA@SRI-AI.ARPA rooms
C00969 00175 ∂01-Dec-83 0851 DKANERVA@SRI-AI.ARPA Newsletter No. 11, December 1, 1983
C00989 00176 ∂01-Dec-83 0905 KJB@SRI-AI.ARPA Your memo
C00991 00177 ∂01-Dec-83 1143 GOLUB@SU-SCORE.ARPA Next meeting
C00992 00178 ∂01-Dec-83 1430 GOLUB@SU-SCORE.ARPA Consulting
C00993 00179 ∂01-Dec-83 1555 @SU-SCORE.ARPA:reid@Glacier official rumor
C00995 00180 ∂01-Dec-83 1656 GOLUB@SU-SCORE.ARPA course scheduling
C00997 00181 ∂01-Dec-83 1714 BMACKEN@SRI-AI.ARPA Staff meeting times
C00999 00182 ∂01-Dec-83 1803 @SU-SCORE.ARPA:lantz@diablo Re: course scheduling
C01001 00183 ∂02-Dec-83 0153 LAWS@SRI-AI.ARPA AIList Digest V1 #107
C01025 00184 ∂02-Dec-83 0947 KJB@SRI-AI.ARPA ARea C meeting with Burstall
C01026 00185 ∂02-Dec-83 1115 @SU-SCORE.ARPA:ullman@diablo Computer Use Committee
C01029 00186 ∂02-Dec-83 1342 GOLUB@SU-SCORE.ARPA Help needed
C01030 00187 ∂02-Dec-83 1403 GOLUB@SU-SCORE.ARPA Vote for Consulting Professors
C01033 00188 ∂02-Dec-83 2044 LAWS@SRI-AI.ARPA AIList Digest V1 #108
C01063 00189 ∂04-Dec-83 0908 @SU-SCORE.ARPA:uucp@Shasta Re: official rumor
C01066 00190 ∂04-Dec-83 1748 PPH Course Anouncement - SWOPSI 160
C01068 00191 ∂05-Dec-83 0250 LAWS@SRI-AI.ARPA AIList Digest V1 #109
C01089 00192 ∂05-Dec-83 0802 LAWS@SRI-AI.ARPA AIList Digest V1 #109
C01110 00193 ∂05-Dec-83 1022 KJB@SRI-AI.ARPA This Thursday
C01112 00194 ∂05-Dec-83 1255 @MIT-MC:MINSKY%MIT-OZ@MIT-MC
C01113 00195 ∂05-Dec-83 1332 ALMOG@SRI-AI.ARPA Reminder on why context wont go away
C01116 00196 ∂05-Dec-83 1342 @MIT-MC:MINSKY%MIT-OZ@MIT-MC
C01117 00197 ∂05-Dec-83 1442 LENAT@SU-SCORE.ARPA topic for lunch discussion
C01119 00198 ∂05-Dec-83 1529 GOLUB@SU-SCORE.ARPA Absence
C01120 00199 ∂05-Dec-83 1533 GOLUB@SU-SCORE.ARPA Meeting
C01121 00200 ∂05-Dec-83 1556 JF@SU-SCORE.ARPA Bell Fellowship
C01124 00201 ∂05-Dec-83 1606 @SU-SCORE.ARPA:WIEDERHOLD@SUMEX-AIM.ARPA Re: Bell Fellowship
C01126 00202 ∂05-Dec-83 1628 TAJNAI@SU-SCORE.ARPA Re: Bell Fellowship
C01127 00203 ∂05-Dec-83 1745 @SU-SCORE.ARPA:GENESERETH@SUMEX-AIM.ARPA Re: Call for Bell Fellowship Nominations
C01129 00204 ∂05-Dec-83 2150 KJB@SRI-AI.ARPA Conditionals Symposium
C01130 00205 ∂05-Dec-83 2301 KJB@SRI-AI.ARPA December 15
C01135 00206 ∂05-Dec-83 2336 @SU-SCORE.ARPA:uucp@Shasta Re: Call for Bell Fellowship Nominations
C01138 00207 ∂06-Dec-83 0040 @SRI-AI.ARPA:PULLUM%HP-HULK.HP-Labs@Rand-Relay WCCFL DEADLINE
C01140 00208 ∂06-Dec-83 0826 TAJNAI@SU-SCORE.ARPA Re: Call for Bell Fellowship Nominations
C01142 00209 ∂06-Dec-83 0905 PETERS@SRI-AI.ARPA Talk Wednesday
C01143 00210 ∂06-Dec-83 1246 SCHMIDT@SUMEX-AIM.ARPA IMPORTANT LM-3600 WARNING
C01147 00211 ∂06-Dec-83 1618 GOLUB@SU-SCORE.ARPA vote on consulting professors
C01148 00212 ∂06-Dec-83 1654 EMMA@SRI-AI.ARPA PARTY
C01151 00213 ∂06-Dec-83 1656 BRODER@SU-SCORE.ARPA Last AFLB of 1983
C01153 00214 ∂07-Dec-83 0058 LAWS@SRI-AI.ARPA AIList Digest V1 #110
C01186 00215 ∂07-Dec-83 1406 DKANERVA@SRI-AI.ARPA Room change for Thursday Conditionals Symposium
C01188 00216 ∂07-Dec-83 1917 DKANERVA@SRI-AI.ARPA Newsletter No. 12, December 8, 1983
C01211 00217 ∂08-Dec-83 0713 KJB@SRI-AI.ARPA A.S.L.
C01213 00218 ∂08-Dec-83 1227 YAO@SU-SCORE.ARPA Library hours
C01215 00219 ∂08-Dec-83 1300 LIBRARY@SU-SCORE.ARPA Speed Processed Books with call #'s like 83-001326
C01217 00220 ∂08-Dec-83 1640 WUNDERMAN@SRI-AI.ARPA Friday Phone Calls to Ventura
C01219 00221 ∂08-Dec-83 1707 RIGGS@SRI-AI.ARPA SPECIAL MONDAY TALK
C01221 00222 ∂08-Dec-83 2047 @MIT-MC:RICKL%MIT-OZ@MIT-MC Model Theoretic Ontologies
C01224 00223 ∂08-Dec-83 2056 @MIT-MC:RICKL%MIT-OZ@MIT-MC Model Theoretic Ontologies
C01228 00224 ∂09-Dec-83 1048 TAJNAI@SU-SCORE.ARPA Computer Forum dates
C01229 00225 ∂09-Dec-83 1156 EMMA@SRI-AI.ARPA CSLI Directory
C01232 00226 ∂09-Dec-83 1159 KJB@SRI-AI.ARPA New members of CSLI
C01234 00227 ∂09-Dec-83 1205 KJB@SRI-AI.ARPA Party
C01236 00228 ∂09-Dec-83 1316 ULLMAN@SU-SCORE.ARPA CIS building
C01237 00229 ∂09-Dec-83 1538 RIGGS@SRI-AI.ARPA Talk by Stalnacker
C01239 00230 ∂10-Dec-83 0822 KJB@SRI-AI.ARPA Sigh
C01241 00231 ∂10-Dec-83 1902 LAWS@SRI-AI.ARPA AIList Digest V1 #111
C01267 00232 ∂12-Dec-83 0912 RIGGS@SRI-AI.ARPA GARDENFORS TALK CANCELLED
C01268 00233 ∂12-Dec-83 1126 TAJNAI@SU-SCORE.ARPA Bell Nominations
C01271 00234 ∂12-Dec-83 1135 TAJNAI@SU-SCORE.ARPA Re: Bell Nominations
C01273 00235 ∂12-Dec-83 1208 RIGGS@SRI-AI.ARPA GARDENFORS TALK IS UNCANCELLED
C01274 00236 ∂12-Dec-83 1216 TAJNAI@SU-SCORE.ARPA Re: Bell Nominations
C01276 00237 ∂12-Dec-83 1422 TAJNAI@SU-SCORE.ARPA
C01278 00238 ∂12-Dec-83 1405 TAJNAI@SU-SCORE.ARPA Bell Nominations
C01282 00239 ∂12-Dec-83 1706 TAJNAI@SU-SCORE.ARPA Update on Bell Nominations
C01288 00240 ∂13-Dec-83 1055 EMMA@SRI-AI.ARPA Holiday Potluck Party (reminder)
C01290 00241 ∂13-Dec-83 1349 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Dec. 15th
C01293 00242 ∂13-Dec-83 2134 KJB@SRI-AI.ARPA Reorganization
C01299 00243 ∂14-Dec-83 1220 YAO@SU-SCORE.ARPA [C.S./Math Library <LIBRARY@SU-SCORE.ARPA>: Math/CS Library Hours]
C01302 00244 ∂14-Dec-83 1401 @SU-SCORE.ARPA:CAB@SU-AI CSD Colloquium
C01303 00245 ∂14-Dec-83 1459 LAWS@SRI-AI.ARPA AIList Digest V1 #112
C01332 00246 ∂15-Dec-83 0249 @SU-SCORE.ARPA:ROD@SU-AI CSD Colloquium
C01333 00247 ∂15-Dec-83 0858 DKANERVA@SRI-AI.ARPA newsletter No. 13, December 15, 1983
C01359 00248 ∂15-Dec-83 0906 KJB@SRI-AI.ARPA Next quarter's schedule
C01360 00249 ∂15-Dec-83 0912 KJB@SRI-AI.ARPA Holiday greetings to us from Bell Labs
C01361 00250 ∂15-Dec-83 2118 @SU-SCORE.ARPA:CMILLER@SUMEX-AIM.ARPA [Carole Miller <CMILLER@SUMEX-AIM.ARPA>: HPP OPEN HOUSE - 12/15]
C01364 00251 ∂16-Dec-83 0810 @SU-SCORE.ARPA:uucp@Shasta Re: Update on Bell Nominations
C01366 00252 ∂16-Dec-83 0818 @SU-SCORE.ARPA:reid@Glacier Re: Update on Bell Nominations
C01368 00253 ∂16-Dec-83 0827 WUNDERMAN@SRI-AI.ARPA Friday morning staff meetings at Ventura
C01369 00254 ∂16-Dec-83 1128 WILKINS@SRI-AI.ARPA Prof. Cohn's response to CSLI
C01370 00255 ∂16-Dec-83 1327 LAWS@SRI-AI.ARPA AIList Digest V1 #113
C01394 00256 ∂18-Dec-83 1526 LAWS@SRI-AI.ARPA AIList Digest V1 #114
C01422 00257 ∂19-Dec-83 0912 KJB@SRI-AI.ARPA reminder
C01424 00258 ∂19-Dec-83 1124 EMMA@SRI-AI.ARPA directory
C01428 00259 ∂19-Dec-83 1736 BMOORE@SRI-AI.ARPA soliciting postdoc applications
C01430 00260 ∂20-Dec-83 1349 GROSZ@SRI-AI.ARPA Visitor: David Israel, BBN
C01432 00261 ∂20-Dec-83 1739 BMOORE@SRI-AI.ARPA long term visitors
C01434 00262 ∂21-Dec-83 0613 LAWS@SRI-AI.ARPA AIList Digest V1 #115
C01454 00263 ∂21-Dec-83 1017 BMOORE@SRI-AI.ARPA Re: soliciting postdoc applications
C01456 00264 ∂22-Dec-83 2213 LAWS@SRI-AI.ARPA AIList Digest V1 #116
C01473 00265 ∂27-Dec-83 1354 ELYSE@SU-SCORE.ARPA Christensen Fellowships for Senior Faculty at St. Catherine's
C01477 00266 ∂27-Dec-83 1822 GOLUB@SU-SCORE.ARPA Faculty meeting
C01478 00267 ∂27-Dec-83 1825 GOLUB@SU-SCORE.ARPA Senior Faculty Meeting
C01479 00268 ∂28-Dec-83 1054 EMMA@SRI-AI.ARPA recycling
C01480 00269 ∂28-Dec-83 1235 KJB@SRI-AI.ARPA Claire
C01481 00270 ∂28-Dec-83 1237 BMOORE@SRI-AI.ARPA Jeremy William Moore
C01482 00271 ∂29-Dec-83 1034 EMMA@SRI-AI.ARPA Directory
C01485 00272 ∂30-Dec-83 0322 LAWS@SRI-AI.ARPA AIList Digest V1 #117
C01512 00273
C01517 00274 ∂01-Jan-84 1731 @MIT-MC:crummer@AEROSPACE Autopoietic Systems
C01530 00275 ∂01-Jan-84 1744 @MIT-MC:crummer@AEROSPACE Autopoietic Systems
C01543 00276 ∂01-Jan-84 1753 @MIT-MC:crummer@AEROSPACE Autopoietic Systems
C01556 00277 ∂03-Jan-84 1129 SCHMIDT@SUMEX-AIM.ARPA HPP dolphins & 3600's unavailable Jan 4 (tomorrow)
C01558 00278 ∂03-Jan-84 1823 LAWS@SRI-AI.ARPA AIList Digest V2 #1
C01575 00279 ∂04-Jan-84 1020 DKANERVA@SRI-AI.ARPA Newsletter will resume January 12, 1984
C01576 00280 ∂04-Jan-84 1139 STAN@SRI-AI.ARPA Foundations Seminar
C01579 00281 ∂04-Jan-84 1157 @SU-SCORE.ARPA:TW@SU-AI Santa Cruz
C01580 00282 ∂04-Jan-84 1817 GOLUB@SU-SCORE.ARPA Yet more on charging for the Dover
C01583 00283 ∂04-Jan-84 2049 LAWS@SRI-AI.ARPA AIList Digest V2 #2
C01614 00284 ∂05-Jan-84 0940 DFH SPECIAL SEMINAR
C01618 00285 ∂05-Jan-84 0951 @SRI-AI.ARPA:DFH@SU-AI SPECIAL SEMINAR
C01622 00286 ∂05-Jan-84 1218 GOLUB@SU-SCORE.ARPA Faculty meeting
C01623 00287 ∂05-Jan-84 1502 LAWS@SRI-AI.ARPA AIList Digest V2 #3
C01652 00288 ∂05-Jan-84 1629 SCHMIDT@SUMEX-AIM.ARPA 3600 inventory grows
C01654 00289 ∂05-Jan-84 1824 ALMOG@SRI-AI.ARPA Seminar on why DISCOURSE wont go away
C01661 00290 ∂05-Jan-84 1939 LAWS@SRI-AI.ARPA AIList Digest V2 #4
C01688 00291 ∂06-Jan-84 0833 RIGGS@SRI-AI.ARPA STAFF WHERABOUTS JAN 6
C01689 00292 ∂06-Jan-84 1141 ETCHEMENDY@SRI-AI.ARPA Visitor
C01691 00293 ∂06-Jan-84 1429 ELYSE@SU-SCORE.ARPA Forsythe Lectures
C01693 00294 ∂06-Jan-84 1502 JF@SU-SCORE.ARPA computational number theory
C01695 00295 ∂06-Jan-84 1524 GOLUB@SU-SCORE.ARPA AGENDA
C01697 00296 ∂06-Jan-84 1831 GOLUB@SU-SCORE.ARPA Faculty lunch
C01698 00297 ∂06-Jan-84 1931 WUNDERMAN@SRI-AI.ARPA Mitch Waldrop's Visit
C01700 00298 ∂07-Jan-84 0229 RESTIVO@SU-SCORE.ARPA PROLOG Digest V2 #1
C01717 00299 ∂07-Jan-84 1726 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
C01721 00300 ∂09-Jan-84 1215 PATASHNIK@SU-SCORE.ARPA student meeting
C01723 00301 ∂09-Jan-84 1414 MOLENDER@SRI-AI.ARPA Talk on ALICE, 1/23, 4:30pm, EK242
C01726 00302 ∂09-Jan-84 1455 KJB@SRI-AI.ARPA Afternoon seminar for Winter Quarter
C01728 00303 ∂09-Jan-84 1543 REGES@SU-SCORE.ARPA TA's for Winter
C01730 00304 ∂09-Jan-84 1629 ALMOG@SRI-AI.ARPA reminder on WHY DISCOURSE WONT GO AWAY
C01733 00305 ∂09-Jan-84 1641 LAWS@SRI-AI.ARPA AIList Digest V2 #5
C01748 ENDMK
C⊗;
∂11-Nov-83 0858 BMACKEN@SRI-AI.ARPA Visit of CSLI Advisory Panel
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Nov 83 08:58:31 PST
Date: Fri 11 Nov 83 08:54:15-PST
From: BMACKEN@SRI-AI.ARPA
Subject: Visit of CSLI Advisory Panel
To: csli-folks@SRI-AI.ARPA
A reminder that the CSLI Advisory Panel will be here next week
on Thursday, Friday, and Saturday (Nov 17 - 19). They will
attend Thursday activities with us, meet with the Executive
Committee Friday morning, and have informal meetings with
you Friday afternoon. We have planned an extended tea --
wine and cheese -- from 3:30 until 6:00 on Friday to give
everyone an opportunity for more interaction.
Regarding the Friday afternoon informal meetings, we hope
you have saved as much of that afternoon as possible. We
wanted to let each of them decide how to use that time, so
I can't say now how it will go. Instead would those of
you who haven't already done so, please let me know when
you'll be available that afternoon and where I can reach
you. That way I can help the groups connect.
Thanks.
B.
-------
∂11-Nov-83 0940 JF@SU-SCORE.ARPA fifth speaker for the 21st
Received: from SU-SCORE by SU-AI with TCP/SMTP; 11 Nov 83 09:40:41 PST
Date: Fri 11 Nov 83 09:37:03-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: fifth speaker for the 21st
To: bats@SU-SCORE.ARPA
Dan Greene, from Xerox PARC, will speak at the November 21st BATS meeting
along with the other four speakers. Here is an abstract for his talk:
There is a simple representation of rooted embedded planar graphs that
uses two sets of balanced parentheses. This representation is the best
possible fixed number of bits per edge encoding, and it facilitates the
linear time drawing of planar graphs on grids. I will describe the
encoding and the drawing algorithm, and explore the properties of the
encoding that are related to dual graphs and search strategies.
I will send a schedule specifying who will talk when as soon as that is
decided.
Joan
-------
∂11-Nov-83 1401 LENAT@SU-SCORE.ARPA FORUM SCHEDULING (important)
Received: from SU-SCORE by SU-AI with TCP/SMTP; 11 Nov 83 14:01:28 PST
Date: Fri 11 Nov 83 13:59:13-PST
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: FORUM SCHEDULING (important)
To: faculty@SU-SCORE.ARPA
cc: tajnai@SU-SCORE.ARPA, elyse@SU-SCORE.ARPA
We are now planning the program for the Sixteenth Annual Computer
Forum Meeting scheduled for February 8/9, 1984 (Wednesday/Thursday).
We plan to mail an advance program to the Forum participants by
early December, so we'd like the following information now:
1. Please submit the names of your advisees whom you wish to speak.
Names should be sent to Carolyn Tajnai by Nov. 15.
Priority will be given in the following order:
A. Students expecting to graduate in 1984,
who have never spoken at a Forum meeting.
B. Students expecting to graduate in 1984,
who are working in a new area and wish
to speak on a different topic than before.
C. Students who have never spoken at a Forum meeting.
D. Anyone other than PhD students
2. If you submit the names of more than one student, rank them
according to your priority.
3. Both the number of speaker "slots" and the size of each slot are
roughly -- but only roughly -- decided already. Therefore, we
can't decide who will be able to speak, and how long a time they'll
be allotted, until we receive lists of possible speakers from
the faculty.
4. The follow-up session was well received last year. If you have
a student who spoke last year and he/she has new developments, let us
know so we can schedule the student in the follow-up session (10
minute update).
5. The ``Birds-of-a-Feather'' sessions were also well received last
year, but this year we will provide maps to the different conference
rooms. We again request each research group to be available in an
assigned conference room for informal discussions with the visitors.
The time will probably be from 3:15 to 4:15 on Thursday, Feb. 9.
Because we must send the schedule out in a timely fashion,
we ask that you reply within two weeks. If we haven't heard
from you by then (Nov 22) we'll assume you won't have any
students speaking at this year's meeting. We hope that this
year's policy, which is much more flexible than our past one
(exactly one slot per professor), will better meet the needs
of the faculty and the students (given the highly
variable number of eligible students each of us has each year.)
If you have any questions or suggestions, please send them to
me or to Carolyn Tajnai.
Thanks,
Doug Lenat
-------
∂11-Nov-83 1432 PETERS@SRI-AI.ARPA House sitter wanted
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Nov 83 14:31:49 PST
Date: Fri 11 Nov 83 14:17:18-PST
From: Stanley Peters <PETERS@SRI-AI.ARPA>
Subject: House sitter wanted
To: csli-friends@SRI-AI.ARPA
Would anyone who is interested in house sitting from about Dec.
17th until about Jan. 6th please contact Stanley Peters either by
phone (497-2212, 497-0939, or 328-9779), electronically at
PETERS@SRI-AI, or in person at Ventura Hall room 29?
-------
∂11-Nov-83 1437 LENAT@SU-SCORE.ARPA Invitation to my Colloq 11/15
Received: from SU-SCORE by SU-AI with TCP/SMTP; 11 Nov 83 14:36:56 PST
Date: Fri 11 Nov 83 14:31:11-PST
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: Invitation to my Colloq 11/15
To: tenured-faculty@SU-SCORE.ARPA
I will be giving the CS Colloquium this Tuesday (11/15, 4:15 Terman
Aud), summarizing my research on machine learning during my years at
Stanford. Since I'm being considered for tenure this year, it might
be a good opportunity for you to find out what I've been up to. Hope
to see you there.
-- Doug Lenat
-------
∂11-Nov-83 2131 BRODER@SU-SCORE.ARPA Abstract for Kirkpatrick's talk.
Received: from SU-SCORE by SU-AI with TCP/SMTP; 11 Nov 83 21:31:43 PST
Date: Fri 11 Nov 83 21:30:25-PST
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Abstract for Kirkpatrick's talk.
To: aflb.all@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
11/17/83 - Prof. Dave Kirkpatrick (Univ. of British Columbia and IBM
San Jose):
"Determining the Separation of Convex Polyhedra"
We describe a unified approach to problems of determining the
separation (and, as a byproduct, detecting the intersection) of convex
polyhedra in two and three dimensions. Our results unify (and, in a
number of cases, improve) the best upper bounds known for problems of
this type.
As an example, we show that the separation of two (suitably
preprocessed) n vertex 3-polyhedra can be determined in O((log n)↑2)
steps. The preprocessed representation is scale and orientation
independent and can be constructed in linear time from standard
representations of polyhedra.
Algorithms for such related problems as polyhedron/subspace
intersection, collision detection, and occlusion are also discussed.
******** Time and place: Nov. 17, 12:30 pm in MJ352 (Bldg. 460) *******
-------
∂13-Nov-83 0227 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #51
Received: from SU-SCORE by SU-AI with TCP/SMTP; 13 Nov 83 02:26:51 PST
Date: Saturday, November 12, 1983 10:48PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #51
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Sunday, 13 Nov 1983 Volume 1 : Issue 51
Today's Topics:
Implementations - An Algorithmic Capability & Databases,
----------------------------------------------------------------------
Date: Mon, 7 Nov 83 11:55 EST
From: Tim Finin <Tim.UPenn@Rand-Relay>
Subject: Database Hacking Ideas In Prolog
I have heard about Prolog systems that include facilities to
partition the database and to control the sections that are
searched when satisfying goals. Does anybody know of such
systems ? ...
One idea is to organize the database into a tree of contexts.
Retrieval and assert/retract is done with respect to the current
context. This scheme was worked out by Sussman and McDermott for
CONNIVER and is also used in McDermott's DUCK language.
-- Tim
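[One minimal way to sketch such a context tree in plain Prolog, for
illustration only -- the predicate names here (add_context/2, assert_in/2,
holds_in/2) are invented and are not taken from CONNIVER, DUCK, or any of
the systems asked about:]

    % Contexts form a tree; each fact is tagged with the context in which
    % it was asserted, and a lookup in a context also searches its ancestors.
    :- dynamic(parent/2).          % parent(Child, Parent)
    :- dynamic(fact_in/2).         % fact_in(Context, Fact)

    add_context(Child, Parent) :- assertz(parent(Child, Parent)).

    assert_in(Context, Fact)  :- assertz(fact_in(Context, Fact)).
    retract_in(Context, Fact) :- retract(fact_in(Context, Fact)).

    % holds_in(Context, Fact): Fact is visible in Context or an ancestor.
    holds_in(Context, Fact) :- fact_in(Context, Fact).
    holds_in(Context, Fact) :- parent(Context, Up), holds_in(Up, Fact).

    % ?- add_context(root, none), add_context(hypo1, root),
    %    assert_in(root, likes(mary, wine)),
    %    holds_in(hypo1, likes(mary, X)).          gives  X = wine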
------------------------------
Date: Tue, 8 Nov 83 18:35 EST
From: Chris Moss <Moss.UPenn@Rand-Relay>
Subject: Modules, and Algorithms
As a response to Wayne Christopher, there are several Prolog systems
that include modules. I'll briefly describe one: microProlog,
available for CP/M and IBM PC machines. A module appears as 3
entities: a module name, names of relations that are defined in the
module, and names of symbols referenced in the outside world. (I.e.
export and import). Thus each module maintains its own dictionary and
modules can of course be nested, saved and edited (using unwrap and
wrap commands) and one can change contexts easily. There are a few
hacks (module names must be distinct from other names) but the system
generally works well. It does stop familiar problems like acquiring 2
copies of append from different files!
On Russ Abbott's question about Prolog for algorithms, I would
refer him to Bob Kowalski's series of papers on the subject, though
they are not completed. Processing a command involves a parameter
representing the database and a new database created by it. He
characterises an algorithm as logic+control, and several papers by
Clark and Tarnlund expand this in developing efficient programs from
inefficient specifications. Improvements in Prolog implementation
techniques, such as tail recursion, make iterative programs
reasonable. So the work is being done although it is far from
completion. The danger is that like Lisp, the available tools become
a de facto standard and prevent the 'right' solutions being used.
Refs: K.A.Bowen, R.A. Kowalski: Amalgamating Language and
Metalanguage in Logic Programming. in Logic Programming
(Clark, Tarnlund Eds). Academic Press, 1982.
R.A.Kowalski: Algorithm=Logic+Control. CACM 22, 424-431. 1979.
K.L.Clark, J. Darlington: Algorithm Classification through
Synthesis. Computer Journal 23/1. 1980.
A.Hansson, S.Haridi, S-A.Tarnlund: Properties of a Logic
Programming Language. in Logic Programming. 1982.
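[A generic illustration of the remark above about tail recursion -- this is
a textbook accumulator example, not a program from the papers cited above:
the recursive call is the last goal of the clause, so a Prolog with
last-call (tail-recursion) optimisation runs it in constant stack space,
like an iterative loop.]

    sum_all(List, Sum) :- sum_all(List, 0, Sum).

    sum_all([], Acc, Acc).
    sum_all([X|Xs], Acc0, Sum) :-
            Acc1 is Acc0 + X,
            sum_all(Xs, Acc1, Sum).

    % ?- sum_all([1,2,3,4], S).          gives  S = 10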
------------------------------
Date: Monday, 7-Nov-83 19:32:24-GMT
From: O'Keefe HPS (on ERCC DEC-10) <OKeefe.R.A. at EDXA>
Subject: Imperative Prolog, P(X..Z)
Logic programs are just as much algorithms as assembly code. So
we should distinguish not between "algorithmic" and "pure", but
between "imperative" and "applicative" uses of Prolog. That aside, I
would like to agree with Russ Abbott. [How's that for a record ?]
There is no need for us to invent an imperative version of Prolog.
Dijkstra has already done it for us. We have only to add a few new
data structures.
There are two good reasons why most Prologs don't accept terms
like P(X...Z). The first is that it is not at all clear what they
mean. 'apply' is defined in the DEC-10 Prolog library as
apply(Pred, Args) :-
atom(Pred), !,
Goal =.. [Pred|Args],
call(Goal).
apply(Pred, Args) :-
Pred =.. List,
append(List, Args, Full),
Goal =.. Full,
call(Goal).
modulo details. The point of this is to give us partial application,
which is how we get the effect of higher-order functions without the
need for any other extensions to the language. [Re dictating to the
user: neither David Warren nor I said to the user "thou shalt not use
higher-order code". His paper pointed out "you've already got the
tools you need, go to it."] So we very often want P to have some of
its arguments already filled in. Since
apply(p, [a,b,c])
apply(p(a), [b,c])
apply(p(a,b), [c])
apply(p(a,b,c), [])
all have exactly the same effect (modulo calls to append), we would like
P1 = p, P1(a,b,c)
P2 = p(a), P2(b,c)
P3 = p(a,b), P3(c)
P4 = p(a,b,c), P4
to have the same effect. Since the terms in the right hand column are
all supposed to be the same goal, we would like them to unify. That's
not too hard to arrange for P1(a,b,c) and P4, but rather harder for
the others. Before anyone points out that p(a,b,c) and
apply(p,[a,b,c]) don't unify either, let me point out that
apply(p,[a,b,c]) makes no pretence of BEING the goal p(a,b,c), but
only claims to CALL it.
Still on logical difficulties, if P(X...Z) is accepted as a term,
one might expect to match it against something in order to bind P.
Again, we already have
P4 = p(a,b,c) working, and it wouldn't be too hard to make
P1(a,b,c) = p(a,b,c)
work. Indeed, you can tell Poplog that you want this to happen. But
P2 and P3 give you trouble. Restricting "predicate variables" to take
on atomic values only just will not do: my experience has been that I
have wanted to pass around a goal with some of its arguments filled in
much more often than I have wanted to pass a goal with none of its
arguments specified. For example, to add N to all the elements in a
list, one might write
... maplist(plus(N), L, Incremented), ...
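[To make the partial-application point concrete, here is the usual
definition of a maplist-style predicate written in terms of apply/2 as
defined above; map_list/3 and add/3 are illustrative names chosen to avoid
clashing with library predicates of the same meaning:]

    map_list(_, [], []).
    map_list(P, [X|Xs], [Y|Ys]) :-
            apply(P, [X, Y]),        % P gets its last two arguments here
            map_list(P, Xs, Ys).

    add(N, X, Y) :- Y is X + N.

    % ?- map_list(add(2), [1,2,3], L).          gives  L = [3,4,5]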
Poplog permits P$[X,...,Z], and {SU-SCORE}PS:<Prolog>Read.Pl
permits P(X,...,Z) -- against my better judgement -- as notational
variants for apply(P, [X,...,Z]). The Poplog form, where $ is just an
infix 'apply', is a lot cleaner. Of course neither of these makes
much sense except as a goal, and when P is bound to 'p' P$[a,b,c]
claims to be the goal p$[a,b,c] not the goal p(a,b,c).
The second good reason is: how are such terms to be stored ?
Many Prologs use a very space-efficient scheme where f(X1,...,Xn) is
represented by n+1 consecutive pointers. The 1st .. nth pointers are
or point to the arguments, while the 0th pointer points to a "functor
block" representing f/n, so that there is no need to store the arity
with the term. This might be thought to lead to more page references,
but there is an obvious little trick to reduce the impact of that.
The representation used by Poplog is {X1 ... Xn f}, which turns out to
require at least two more words per term. Since most terms are fairly
short, this is a substantial overhead. Poplog represents Prolog lists
as Pop pairs which makes the space cost the same for both methods, but
the price is a time cost on *everything* that looks at terms, even
when they aren't lists. The Poplog system can however handle P4 as
well as P1. Prologs embedded in Lisp commonly represent Prolog terms
as lists. With CDR-coding, that is as efficient as Poplog.
Presumably LM-Prolog does this. If you lack CDR-coding, as
micro-PROLOG does, the result is that handling P(X...Z) as (P X ... Z)
is trivial, indeed it is almost inescapable, but that the space cost
is appalling.
Of course a mixed representation could be used, with most terms
being represented efficiently as <↑f/n, X1, ..., Xn>, and P(X1..Xn)
terms being handled as '$$'(P,X1,...,Xn). We could arrange that if
unification of two compound terms failed, it would check for either or
both of the terms being '$$', and if so would flatten them, E.g.
'$$'(p(a,b),c) => p(a,b,c), and try again. We would also arrange to
have machine code definitions for all '$$'/N (whenever a clause is
asserted for F/N, ensure that $$/1 .. $$/N+1 are all defined) with the
effect of
'$$'(Pred, A1, ..., An) :-
call Pred with extra args A1 .. An.
where of course Pred might be another $$ term.
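[The flattening step just described can itself be sketched in Prolog;
flatten_call/2 is an invented name here, and a real implementation would
of course do this inside unification rather than as a user predicate:]

    % flatten_call(+Wrapped, -Goal): turn '$$'(P, X1, ..., Xn) into the
    % goal P with X1 ... Xn appended to its arguments; '$$' terms may nest.
    flatten_call(Wrapped, Goal) :-
            Wrapped =.. ['$$', P | Extra],
            !,
            flatten_call(P, Inner),
            Inner =.. List,
            append(List, Extra, Full),
            Goal =.. Full.
    flatten_call(Goal, Goal).

    % ?- flatten_call('$$'(p(a,b), c), G).            gives  G = p(a,b,c)
    % ?- flatten_call('$$'('$$'(p(a), b), c), G).     gives  G = p(a,b,c)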
Now I would certainly agree that having a family of predicates
apply(p(X1,..,Xk), Xk+1, ..., Xn) :-
p(X1, ..., Xk, Xk+1, ..., Xn)
efficiently implemented in machine code so that all these wretched
lists weren't created only to be immediately thrown away would be a
good thing. But I cannot see why writing
apply(P, X, Y, Z) or P$[X, Y, Z]
is thought to be an outrageous demand on the programmer, and
P(X, Y, Z)
is thought to be so much more attractive. Whether Prolog **ought** to
make a syntactic distinction between goals and terms is not yet at
issue; the fact is that it *doesn't*, and if we are to adopt a
notation for one use, it ought to make sense in the other. The
implementation cost of making P2(b,c) = P3(c) work is far too high for
my liking, and the increased complexity in the language in either case
is unacceptable.
Here is a quotation from Hoare, "Hints on Programming Language
Design", which explains my attitude better than I can.
A necessary condition ... is the utmost simplicity in the design
of the language. ... But the main beneficiary of simplicity is the
user of the language. In all spheres of human intellectual and
practical activity, from carpentry to golf, from sculpture to space
travel, the true craftsman is the one who thoroughly understands his
tools. And this applies to programmers too. ...
It therefore seems especially necessary in the design of a new
programming language ... to pursue the goal of simplicity to an
extreme, so that a programmer can readily learn and remember all its
features, can select the best facility for each of his purposes, can
fully understand the effects and consequences of each decision, and
can then concentrate the major part of his intellectual effort to
understanding his problem and his programs rather than his tool.
[This is the paper where he says of Fortran "The standardizers have
maintained the horrors of early implementations, and have resolutely
set their face against the advance of language design technology, and
have thereby saved it from many later horrors." I wonder what he'd
say now of Fortran 8x?! Save Prolog from horrors!]
------------------------------
End of PROLOG Digest
********************
∂13-Nov-83 1536 @SRI-AI.ARPA:vardi%SU-HNV.ARPA@SU-SCORE.ARPA Knowledge Seminar
Received: from SRI-AI by SU-AI with TCP/SMTP; 13 Nov 83 15:36:21 PST
Received: from SU-SCORE.ARPA by SRI-AI.ARPA with TCP; Sun 13 Nov 83 15:34:49-PST
Received: from Diablo by Score with Pup; Sun 13 Nov 83 15:32:27-PST
Date: Sun, 13 Nov 83 15:28 PST
From: Moshe Vardi <vardi%Diablo@SU-Score>
Subject: Knowledge Seminar
To: knowledge@Diablo
A public mailing list has been established for the Knowledge Seminar. All
the people that asked to be on it are there. To add yourself to the list
send mailer@su-hnv the message "add knowledge". To remove yourself for the
list send the message "delete knowledge".
If you are on the list, you should receive this message.
Moshe
∂13-Nov-83 1708 ALMOG@SRI-AI.ARPA reminder on why context wont go away
Received: from SRI-AI by SU-AI with TCP/SMTP; 13 Nov 83 17:07:41 PST
Date: 13 Nov 1983 1702-PST
From: Almog at SRI-AI
Subject: reminder on why context wont go away
To: csli-friends at SRI-AI
cc: grosz, alomg
On Tuesday 11.15.83 we have our seventh meeting. The speaker
is J. Hobbs from SRI. Next time the speaker will be J. Moravcsik.
I attach the abstract of Hobbs' talk. (N.B. meetings at Ventura,
3.15 pm).
Context Dependence in Interpretation
Jerry R. Hobbs
SRI International
It is a commonplace in AI that the interpretation of utterances
is thoroughly context-dependent. I will present a framework for
investigating the processes of discourse interpretation, that allows
one to analyze the various influences of context. In this framework,
differences in context are reflected in differences in the structure
of the hearer's knowledge base, in what knowledge he believes he shares
with the speaker, and in his theory of what is going on in the world.
It will be shown how each of these factors can affect aspects of the
interpretation of an utterance, e.g., how a definite reference is
resolved.
-------
-------
∂14-Nov-83 0222 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #52
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Nov 83 02:22:46 PST
Date: Sunday, November 13, 1983 11:14PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #52
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Monday, 14 Nov 1983 Volume 1 : Issue 52
Today's Topics:
Programs - Uncertainties & Searching,
LP Library - Request & Update,
Implementations - Databases
----------------------------------------------------------------------
Date: Fri, 11 Nov 83 09:24:57 PST
From: Koenraad Lecot <Koen@UCLA-CS>
Subject: Logic Programs with Uncertainties
Prerequisites: Paper, Shapiro IJCAI-83
Prospector - Probabilistic Reasoning
Hi everybody,
I hope some of you are familiar with the paper by Ehud Shapiro on
Logic Programs With Uncertainties that appeared in the Proc. of
IJCAI-83. I encountered a few problems when trying to use his
interpreter for Prospector. His interpreter:
solve(true,[]).
solve((A,B),[X|Y]) :- solve(A,X),solve(B,Y).
solve(A,F(S)) :- his_clause(A,B,F),solve(B,S).
F is called a certainty function, is monotone increasing and maps
a list into some numeric value in [0,1]. Of course, when you read
his paper it's all clean and clear. The question is, how useful
is it for existing expert systems ?
One feature Prospector and Mycin have in common is that all evidence
for a particular hypothesis is collected before any conclusion is
made. In Prospector, for example, there are two ways of combining
evidence:
1. logical combinations
2. multiple pieces of evidence
If a hypothesis has multiple pieces of evidence, each will influence
the probability of the hypothesis independently of the other. Note
that Prolog needs only one piece of evidence. On the other hand, the
antecedent of an inference rule may also be a logical combination of
evidences using the logical operators AND, OR, and NOT. We note that
Prospector makes a difference between
H <- E1
H <- E2
and
H <- E1 OR E2
where Prolog does not. The probabilities of logical combinations are
simple fuzzy set formulas: P(A AND B ) = min {P(A),P(B)}
P(A OR B ) = max {P(A),P(B)}
P(NOT A) = 1 - P(A)
The probability of a hypothesis with multiple evidence is defined
as some expression P(H|E') = product of the likelihood ratio for
each evidence.
A problem occurs when trying to apply Shapiro's method to Prospector.
The question is how to deal with multiple evidence. My solution is to
change his interpreter into something like below:
% we assume that all prior probabilities were defined by the domain
% expert; the problem is to compute posterior probabilities
solve((A,true),V) :- solve(A,V).
solve((A,B),V) :- solve(A,V1), solve(B,V2), min(V1,V2,V).
solve((A;B),V) :- solve(A,V1), solve(B,V2), max(V1,V2,V).
solve(A,V) :- rule_head(A), setof0(B,clause(A,B),Bodies),
              solve_list(Bodies,List), compute(A,List,V).
solve(A,V) :- fact(A), ask the user for his estimate or use the
              prior probability
solve_list([],[]).
solve_list([H|T],[VH|VT]) :- solve(H,VH),
                             solve_list(T,VT).
rule_head(A) and fact(A) are defined on the knowledge base, which is
stored in a separate file. This file is consulted using a special
"consult" that keeps track of the database references. I should
note here that Peter Hammond did basically the same thing for his
Mycin in Prolog. ( AS - Imperial College - 1981 )
The question for me is: are we still within the semantics of
Logic Programs with Uncertainties as Shapiro defines them ? Does
it matter ? Shapiro does not mention multiple evidence in his paper
as this is not pure Prolog. Has anybody a cleaner solution ? All
comments are welcome.
Thanks,
-- Koenraad Lecot
P.S.: I am not defending Prospector's way of handling uncertainty.
I only tried to use Shapiro's interpreter.
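[The interpreter above leaves min/3, max/3, setof0/3, and the "ask the
user" clause informal. One possible reading -- a guess at the intended
meaning, not taken from the message or from Shapiro's paper -- is:]

    min(X, Y, Z) :- ( X =< Y -> Z = X ; Z = Y ).
    max(X, Y, Z) :- ( X >= Y -> Z = X ; Z = Y ).

    % setof0/3: like setof/3, but succeeds with [] when there are no solutions.
    setof0(Template, Goal, Set) :- setof(Template, Goal, Set), !.
    setof0(_, _, []).

    % One way to read the informal last clause of solve/2 above:
    %   solve(A,V) :- fact(A), estimate_or_prior(A,V).
    % where estimate_or_prior/2 (a hypothetical predicate) asks the user
    % for a certainty and falls back on a stored prior probability.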
------------------------------
Date: Thursday, 10-Nov-83 18:32:29-GMT
From: O'Keefe HPS (on ERCC DEC-10) <OKeefe.R.A. at EDXA>
Subject: Request For Utilities
There are lots of Prolog utility files in the <Prolog>
directory at {SU-SCORE}. With the exception of Barrow's
pretty-printer for terms (and maybe one or two others that
I overlooked, sorry if so) all of them are from Edinburgh
and the only contributions since they were first mounted have
been from me.
I think the Edinburgh library has some really good stuff
in it, but I do *not* think it adequate for all needs. There
must be LOTS of useful little files kicking around that could
be cleaned up and submitted to the {SU-SCORE} library. If you
have a little program that simplifies propositional formulae,
clean it up and send it in ! Someone may have a use for it.
If you've got an implementation of the SUP-INF method, send
it in ! If you've got a program that checks for style errors
in Prolog, send it in !
The source code you send should be tested, and it should
be possible for someone else to figure out what it's for in
under half an hour, but apart from that anything *useful* goes.
Files in Edinburgh syntax would probably be best, but if you
have something neat you've held back because it's in Waterloo
Prolog, send it in ! Some Waterloo Prolog users read this Digest.
Translations of existing utilities into other dialects would
also be a good idea.
One little point: complete programs that hack the data
base are ok, but "library routines" that people might want
to include in their programs should avoid hacking the data
base if at all possible (not because data base is evil, but
to avoid conflicts with the user's data base hacking), or if
it is not possible to avoid hacking the data base, the
predicates hacked should be listed near the top of the file
and highlighted in some fashion so that the user knows what
he is risking.
Due to transmission delay, my reply to Uday Reddy's
first message appeared after the second one. My qualified
agreement with the first message does not apply to the second.
------------------------------
Date: Sat 12 Nov 83 17:30:33-PST
From: Chuck Restivo <Restivo@SU-SCORE>
Subject: LP Library Update
With thanks to Richard O'Keefe the following routines have been
added to the LP Library,
Assoc.Pl Purpose: Binary tree implementation of
``association lists''
Arrays.Pl Purpose: Updatable arrays
Trees.Pl Purpose: Updatable binary trees
A corrected version of Read_Sent.Pl has also been installed.
These are available on the <Prolog> directory at {SU-SCORE}, for
those readers who cannot access SCORE, I have a limited number of
hard copies that could be mailed.
-- ed
------------------------------
Date: Sun, 13 Nov 83 10:36:04 pst
From: Wayne A. Christopher <Faustus%UCBernie@Berkeley>
Subject: Breadth First Searching and Databases
Is there a simple way to do breadth-first searching in Prolog ?
As a concrete example, say I want to write a predicate
equiv(Expr1, Expr2) where Expr2 is to be a mathematical expression
equivalent to Expr1. (E.g., Expr1 = x + y, Expr2 = 1 * (x + y)) If
you try to implement this as a set of rewriting rules, you seem to
always get stuck in infinite loops. Is this problem fundamental to
the theoretical basis of Prolog, or is there some easy and/or
natural way to, in the example above for instance, get all
possible expressions enumerated with the shorter expressions
first ?
As for the modular database question, I must confess that I did
not consider very carefully what sort of "extra control" it
would provide. From the responses to this, I gather that among
the Prologs that currently support this, the major advantages
consist of differing contexts and subcontexts for evaluation. In
the same area, are there any implementations that support
multiple stacks and substacks ?
-- Wayne Christopher
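[The breadth-first question above has a standard answer, sketched here
with step/2 standing for whatever rewriting relation is in use; the two
step/2 clauses below only illustrate the x + y example and are not from
the message. The idea is to keep an explicit queue of frontier terms, so
solutions come out in order of the number of rewriting steps rather than
by depth-first plunging into one rule.]

    % bfs(+Start, -Term): enumerate terms reachable from Start by repeated
    % application of step/2, fewest rewriting steps first.
    bfs(Start, Term) :- bfs_queue([Start], [Start], Term).

    bfs_queue([X|_], _, X).
    bfs_queue([X|Queue], Seen, Term) :-
            findall(Y, (step(X, Y), \+ member(Y, Seen)), Ys),
            append(Seen, Ys, Seen1),
            append(Queue, Ys, Queue1),      % enqueue at the back: breadth first
            bfs_queue(Queue1, Seen1, Term).

    step(E, 1*E).
    step(A+B, B+A).

    % ?- bfs(x+y, E).   gives  E = x+y ;  E = 1*(x+y) ;  E = y+x ;  ...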
------------------------------
End of PROLOG Digest
********************
∂14-Nov-83 1152 ELYSE@SU-SCORE.ARPA Overseas Studies Centers
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Nov 83 11:52:15 PST
Date: Mon 14 Nov 83 11:50:12-PST
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Overseas Studies Centers
To: Faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
Dear Colleagues:
There will be an announcement in the Campus Report that applications for
teaching assignments at Stanford's Overseas Studies Centers during 1985-1986
are now available in the Overseas Studies Office. The application deadline
is December 20, 1983. Basic information about the application process is
given on a flyer in my office. Application forms and further information
can be obtained by contacting Daryl Sawyer in the Office of Overseas
Studies, Room 112, Old Union.
Gene.
-------
∂14-Nov-83 1324 ELYSE@SU-SCORE.ARPA Annual Faculty Reports
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Nov 83 13:24:17 PST
Date: Mon 14 Nov 83 13:23:03-PST
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Annual Faculty Reports
To: faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
I have just received the annual faculty reports from H&S. I will have them
in your mailbox or in the ID mail today. These need to be sent back to H&S
by Dec. 10 so please return them to me at that time. Do not send them to
H&S yourself. I need to review them. Thanks, Elyse.
-------
∂14-Nov-83 1446 MWALKER@SU-SCORE.ARPA Professor Random
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Nov 83 14:46:21 PST
Date: Mon 14 Nov 83 14:39:35-PST
From: Marilynn Walker <MWALKER@SU-SCORE.ARPA>
Subject: Professor Random
To: CSD-Faculty: ;
Dear Faculty:
I desperately need a volunteer to sit in on Kurt Konolige's oral exam
tomorrow, Nov. 15th at 2:30 p.m. The title of his oral is "A Deductive
Model of Belief". I know everyone is busy, but much appreciation if
you can help me out.
Marilynn
-------
∂14-Nov-83 1702 LAWS@SRI-AI.ARPA AIList Digest V1 #97
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Nov 83 16:59:41 PST
Date: Monday, November 14, 1983 8:59AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #97
To: AIList@SRI-AI
AIList Digest Monday, 14 Nov 1983 Volume 1 : Issue 97
Today's Topics:
Pattern Recognition - Vector Fields,
Psychology - Defense,
Ethics - AI Responsibilities,
Seminars - NRL & Logic Specifications & Deductive Belief
----------------------------------------------------------------------
Date: Sun, 13 Nov 83 19:25:40 PST
From: Philip Kahn <v.kahn@UCLA-LOCUS>
Subject: Need references in field of spatial pattern recognition
This letter to AI-LIST is a request for references from all
of you out there that are heavily into spatial pattern recognition.
First let me explain my approach, then I'll hit you with my
request. Optical flow and linear contrast edges have been getting a
lot of attention recently. Utilizing this approach, I view a line
as an ordered set of [image] elements; that is, a line is comprised of a
finite ordered set of elements. Each element of a line is treated
as a directed line (a vector with direction and magnitude).
Here's what I am trying to define: with such a definition
of a line, it should be possible to create mappings between lines
to form fairly abstract ideas of similarity between lines. Since
objects are viewed as a particular arrangement of lines, this analysis
would suffice in identifying objects as being alike. For example,
finding the two lines possessing the most similarities (i.e.,
MAX ( LINE1 .intersection. LINE2 ) ) may be one criterion of comparison.
I'm looking for any references you might have on this area.
This INCLUDES:
1) physiology/biology/neuroanatomy articles dealing with
functional mappings from the ganglion to any level of
cortical processing.
2) fuzzy set theory. This includes ordered set theory and
any and all applications of set theory to pattern recognition.
3) any other pertinent references
I would greatly appreciate any references you might provide.
After a week or two, I will compile the references and put them
on the AI-LIST so that we all can use them.
Viva la effort!
Philip Kahn
[My correspondence with Philip indicates that he is already familiar
with much of the recent literature on optic flow. He has found little,
however, on the subject of pattern recognition in vector fields. Can
anyone help? -- KIL]
------------------------------
Date: Sun, 13 Nov 1983 22:42 EST
From: Montalvo%MIT-OZ@MIT-MC.ARPA
Subject: Rational Psychology [and Reply]
Date: 28 Sep 83 10:32:35-PDT (Wed)
To: AIList at MIT-MC
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Rational Psychology [and Reply]
... Is psychology rational?
Someone said that all sciences are rational, a moot point, but not that
relevant unless one wishes to consider Psychology a science. I do not.
This does not mean that psychologists are in any way inferior to chemists
or to REAL scientists like those who study physics. But I do think there
....
----GaryFostel----
This is an old submission, but having just read it I felt compelled to
reply. I happen to be a Computer Scientist, but I think
Psychologists, especially Experimental Psychologists, are better
scientists than the average Computer "Scientist". At least they have
been trained in the scientific method, a skill most Computer
Scientists lack. Just because Psychologists, by and large, cannot defend
themselves on this list is no reason to make idle attacks with only
very superficial knowledge on the subject.
Fanya Montalvo
------------------------------
Date: Sun 13 Nov 83 13:14:06-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: just a reminder...
Artificial intelligence promises to alter the world in enormous ways during our
lifetime; I believe it's crucial for all of us to look forward to the effects
of our work, both individually and collectively, to make sure that it will be
to the benefit of all peoples in the world.
It seems to be tiresome to people to remind them of the incredible effect that
AI will have in our lifetimes, yet the profound nature of the changes to the
world made by a small group of researchers makes it crucial that we don't treat
our efforts casually. For example, the military applications of AI will dwarf
that of the atomic bomb, but even more important is the fact that the atomic
bomb is a primarily military device, while AI will impact the world as much (if
not more) in non-military domains.
Physics in the early part of this century was at the cutting edge of knowledge,
similar to the current place of AI. The culmination of their work in the atomic
bomb changed their field immensely and irrevocably; even on a personal level,
researchers in physics found their lives greatly impacted, often shattered.
Many of the top researchers left the field.
During our lifetimes I think we will see a similar transformation, with the
"fun and games" of these heady years turning into a deadly seriousness. I think
we will also see top researchers leaving the field once we start to see some
of our effects on the world. It is imperative for all workers in this field to
formulate and share a moral outlook on what we do, and hope to do, to the
world.
I would suggest we have, at the minimum, a three part responsibility. First, we
must make ourselves aware of the human impact of our work, both short and long
term. Second, we must use this knowledge to guide the course of our research,
both individually and collectively, rather than simply flowing into whatever
area the grants are flowing into. Third and most importantly, we must be
spokespeople and consciences to the world, forcing others to be informed of
what we are doing and its effects. Researchers who still cling to "value-free"
science should not be working in AI.
I will suggest a few areas we should be thinking about:
- Use of AI for offensive military use vs. legitimate defense needs. While the
line is vague, a good offense is surely not always the best defense.
- Will the work cause a centralization of power, or cause a decentralization of
power? Building massive centers of power in this age increases the risk of
humans being dominated by machines.
- Is the work offering tools to extend the grasp of humans, or tools to control
humans?
- Will people have access to the information generated by the work, or will the
benefits of information access be restricted to a few?
Finally, will the work add insights into ourselves as human beings, or will it
simply feed our drives, reflecting our base nature back at ourselves? In the
movie "Tron" an actor says "Our spirit remains in each and every program we
wrote"; what IS our spirit?
David
------------------------------
Date: 8 Nov 1983 09:44:28-PST
From: Elaine Marsh <marsh@NRL-AIC>
Subject: AI Seminar Schedule
[I am passing this along because it is the first mention of this seminar
series in AIList and will give interested readers the chance to sign up
for the mailing list. I will not continue to carry these seminar notices
because they do not include abstracts. -- KIL]
U.S. Navy Center for Applied Research
in Artificial Intelligence
Naval Research Laboratory - Code 7510
Washington, DC 20375
WEEKLY SEMINAR SERIES
14 Nov. 1983 Dr. Jagdish Chandra, Director
Mathematical Sciences Division
Army Research Office, Durham, NC
"Mathematical Sciences Activities Relating
to AI and Its Applications at the Army
Research Office"
21 Nov. 1983 Professor Laveen Kanal
Department of Computer Science
University of Maryland, College Park, MD
"New Insights into Relationships among
Heuristic Search, Dynamic Programming,
and Branch & Bound Procedures"
28 Nov. 1983 Dr. William Gale
Bell Labs
Murray Hill, NJ
"An Expert System for Regression
Analysis: Applying A.I. Ideas in
Statistics"
5 Dec. 1983 Professor Ronald Cole
Department of Computer Science
Carnegie-Mellon University, Pittsburgh, PA
"What's New in Speech Recognition?"
12 Dec. 1983 Professor Robert Haralick
Department of Electrical Engineering
Virginia Polytechnic Institute, Blacksburg, VA
"Application of AI Techniques to the
Interpretation of LANDSAT Scenes over
Mountainous Areas"
Our meetings are usually held Monday mornings at 10:00 a.m. in the
Conference Room of the Navy Center for Applied Research in Artificial
Intelligence (Bldg. 256) located on Bolling Air Force Base, off I-295,
in the southeast quadrant of Washington, DC.
Coffee will be available starting at 9:45 a.m.
If you would like to speak, or be added to our mailing list, or would
just like more information, contact Elaine Marsh at marsh@nrl-aic
[(202)767-2382].
------------------------------
Date: Mon 7 Nov 83 15:20:15-PST
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral
[Reprinted from the SU-SCORE bboard.]
Ph.D. Oral
COMPILING LOGIC SPECIFICATIONS FOR PROGRAMMING ENVIRONMENTS
November 16, 1983
2:30 p.m., Location to be announced
Stephen J. Westfold
A major problem in building large programming systems is in keeping track of
the numerous details concerning consistency relations between objects in the
domain of the system. The approach taken in this thesis is to encourage the
user to specify a system using very-high-level, well-factored logic
descriptions of the domain, and have the system compile these into efficient
procedures that automatically maintain the relations described. The approach
is demonstrated by using it in the programming environment of the CHI
Knowledge-based Programming system. Its uses include describing and
implementing the database manager, the dataflow analyzer, the project
management component and the system's compiler itself. It is particularly
convenient for developing knowledge representation schemes, for example
property inheritance and the automatic maintenance of inverse
property links.
The problem description using logic assertions is treated as a program, much as
in PROLOG, except that there is a separation of the assertions that describe the
problem from assertions that describe how they are to be used. This
factorization allows the use of more general logical forms than Horn clauses as
well as encouraging the user to think separately about the problem and the
implementation. The use of logic assertions is specified at a level natural to
the user, describing implementation issues such as whether relations are stored
or computed, that some assertions should be used to compute a certain function,
that others should be treated as constraints to maintain the consistency of
several interdependent stored relations, and whether assertions should be used
at compile- or execution-time.
Compilation consists of using assertions to instantiate particular procedural
rule schemas, each one of which corresponds to a specialized deduction, and
then compiling the resulting rules to LISP. The rule language is a convenient
intermediate between the logic assertion language and the implementation
language in that it has both a logic interpretation and a well-defined
procedural interpretation. Most of the optimization is done at the logic
level.
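[The following toy Lisp fragment is not the CHI system; with invented names
it only illustrates the flavor of compiling a declarative inverse-property
assertion into a procedure that maintains both links automatically.]
  ;; Toy illustration only (invented names, not the CHI system): an
  ;; assertion that CHILD is the inverse of PARENT is "compiled" into a
  ;; storage procedure that keeps both property links consistent.
  (defvar *db* (make-hash-table :test #'equal)
    "Maps (object property) keys to lists of values.")

  (defun put-prop (obj prop value)
    "Record VALUE under (OBJ PROP) in the toy database."
    (pushnew value (gethash (list obj prop) *db*)))

  (defun compile-inverse (prop inverse-prop)
    "Return a procedure that stores PROP and maintains its inverse."
    (lambda (obj value)
      (put-prop obj prop value)
      (put-prop value inverse-prop obj)))

  ;; (funcall (compile-inverse 'parent 'child) 'alice 'bob)
  ;; (gethash '(alice parent) *db*)  =>  (BOB)
  ;; (gethash '(bob child) *db*)     =>  (ALICE)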
------------------------------
Date: Fri 11 Nov 83 09:56:17-PST
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral
[Reprinted from the SU-SCORE bboard.]
Ph.D. Oral
Tuesday, Nov. 15, 1983, 2:30 p.m.
Bldg. 170 (history corner), conference room
A DEDUCTIVE MODEL OF BELIEF
Kurt Konolige
Reasoning about knowledge and belief of computer and human agents is assuming
increasing importance in Artificial Intelligence systems in the areas of
natural language understanding, planning, and knowledge representation in
general. Current formal models of belief that form the basis for most of these
systems are derivatives of possible-world semantics for belief. However,
this model suffers from epistemological and heuristic inadequacies.
Epistemologically, it assumes that agents know all the consequences of their
beliefs. This assumption is clearly inaccurate, because it doesn't take into
account resource limitations on an agent's reasoning ability. For example, if
an agent knows the rules of chess, it then follows in the possible-world model
that he knows whether white has a winning strategy or not. On the heuristic
side, proposed mechanical deduction procedures have been first-order
axiomatizations of the possible-world model of belief.
A more natural model of belief is a deduction model: an agent has a set of
initial beliefs about the world in some internal language, and a deduction
process for deriving some (but not necessarily all) logical consequences of
these beliefs. Within this model, it is possible to account for resource
limitations of an agent's deduction process; for example, one can model a
situation in which an agent knows the rules of chess but does not have the
computational resources to search the complete game tree before making a move.
This thesis is an investigation of Gentzen-type formalization of the deductive
model of belief. Several important original results are proven. Among these
are soundness and completeness theorems for a deductive belief logic; a
correspondence result that shows the possible-worlds model is a special case of
the deduction model; and a model analog of Herbrand's Theorem for the belief
logic. Several other topics of knowledge and belief are explored in the thesis
from the viewpoint of the deduction model, including a theory of introspection
about self-beliefs, and a theory of circumscriptive ignorance, in which facts
an agent doesn't know are formalized by limiting or circumscribing the
information available to him.
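[A bare-bones Lisp sketch of the deduction model's central idea, namely
resource-bounded closure of beliefs; the names and the crude notion of a
deduction step are invented here, not taken from the thesis.]
  ;; Invented sketch: beliefs are closed under simple
  ;; (antecedent . consequent) rules for at most DEPTH passes, so
  ;; consequences beyond the resource bound are never derived.
  (defun derive-beliefs (beliefs rules depth)
    "Return BELIEFS extended by at most DEPTH passes of rule application."
    (let ((known (copy-list beliefs)))
      (dotimes (pass depth known)
        (dolist (r rules)
          (when (and (member (car r) known :test #'equal)
                     (not (member (cdr r) known :test #'equal)))
            (push (cdr r) known))))))

  ;; (derive-beliefs '(p) '((p . q) (q . r)) 0)  =>  (P)
  ;; (derive-beliefs '(p) '((p . q) (q . r)) 1)  =>  (R Q P)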
------------------------------
End of AIList Digest
********************
∂14-Nov-83 1831 LAWS@SRI-AI.ARPA AIList Digest V1 #96
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Nov 83 18:29:11 PST
Date: Monday, November 14, 1983 8:48AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #96
To: AIList@SRI-AI
AIList Digest Monday, 14 Nov 1983 Volume 1 : Issue 96
Today's Topics:
Theory - Parallel Systems,
Looping Problem in Literature,
Intelligence
----------------------------------------------------------------------
Date: 8 Nov 83 23:03:04-PST (Tue)
From: pur-ee!uiucdcs!uokvax!andree @ Ucb-Vax
Subject: Re: Infinite loops and Turing machines.. - (nf)
Article-I.D.: uiucdcs.3712
/***** uokvax:net.ai / umcp-cs!speaker / 9:41 pm Nov 1, 1983 */
Aha! I knew someone would come up with this one!
Consider that when we talk of simultaneous events... we speak of
simultaneous events that occur within one Turing machine state
and outside of the Turing machine itself. Can a one-tape
Turing machine read the input of 7 discrete sources at once?
A 7 tape machine with 7 heads could!
/* ---------- */
But I can do it with a one-tape, one-head turing machine. Let's assume
that each of your 7 discrete sources can always be represented in n bits.
Thus, the total state of all seven sources can be represented in 7*n bits.
My one-tape turing machine has 2 ** (7*n) symbols, so it can handle your
7 sources, each possible state of all 7 being one symbol of input.
One of the things I did in an undergraduate theory course was show that
an n-symbol turing machine is no more powerful than a two-symbol turing
machine for any finite (countable?) n. You just lose speed.
<mike
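[A small Lisp illustration of the encoding step in the argument above,
assuming each source state fits in n bits; the code is invented, not part
of the original post.]
  ;; Invented illustration: seven source states of n bits each are packed
  ;; into a single symbol from an alphabet of 2^(7n) symbols, and can be
  ;; unpacked again without loss.
  (defun pack-sources (states n)
    "Pack a list of integers, each < 2^n, into one composite symbol."
    (reduce (lambda (acc s) (+ (* acc (expt 2 n)) s)) states
            :initial-value 0))

  (defun unpack-sources (composite n count)
    "Recover COUNT source states from a composite symbol."
    (let ((radix (expt 2 n)) (result '()))
      (dotimes (i count result)
        (push (mod composite radix) result)
        (setf composite (floor composite radix)))))

  ;; (unpack-sources (pack-sources '(3 0 7 1 2 5 6) 3) 3 7)
  ;; => (3 0 7 1 2 5 6)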
------------------------------
Date: Friday, 11 November 1983, 14:54-EST
From: Carl Hewitt <HEWITT at MIT-AI>
Subject: parallel vs. sequential
An excellent treatise on how some parallel machines are more powerful
than all sequential machines can be found in Will Clinger's doctoral
dissertation "Foundations of Actor Semantics" which can be obtained by
sending $7 to
Publications Office
MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, Mass. 02139
requesting Technical Report 633 dated May 1981.
------------------------------
Date: Fri 11 Nov 83 17:12:08-PST
From: Wilkins <WILKINS@SRI-AI.ARPA>
Subject: parallelism and turing machines
Regarding the "argument" that parallel algorithms cannot be run serially
because a Turing machine cannot react to things that happen faster than
the time it needs to change states:
clearly, you need to go back to whoever sold you the Turing machine
for this purpose and get a turbocharger for it.
Seriously, I second the motion to move towards more useful discussions.
------------------------------
Date: 9 Nov 83 19:28:21-PST (Wed)
From: ihnp4!cbosgd!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!mac @ Ucb-Vax
Subject: the halting problem in history
Article-I.D.: uvacs.1048
If there were any 'subroutines' in the brain that could not
halt... I'm sure they would have been found and bred out of
the species long ago. I have yet to see anyone die from
an infinite loop. (umcp-cs.3451)
There is such. It is caused by seeing an object called the Zahir. One was
a Persian astrolabe, which was cast into the sea lest men forget the world.
Another was a certain tiger. Around 1900 it was a coin in Buenos Aires.
Details in "The Zahir", J.L.Borges.
------------------------------
Date: 8 Nov 83 16:38:29-PST (Tue)
From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!asa @ Ucb-Vax
Subject: Re: Inscrutable Intelligence
Article-I.D.: rayssd.233
The problem with a psychological definition of intelligence is in finding
some way to make it different from what animals do, and cover all of the
complex things that humans can do. It used to be measured by written
tests. These were grossly unfair, so visual tests were added. These tend to
be grossly unfair because of cultural bias. Dolphins can do very
"intelligent" things, based on types of "intelligent behavior". The best
definition might be based on the rate at which learning occurs, as some
have suggested, but that is also an oversimplification. The ability to
deduce cause and effect, and to predict effects is obviously also
important. My own feeling is that it has something to do with the ability
to build a model of yourself and modify yourself accordingly. It may
be that "I conceive" (not "I think"), or "I conceive and act", or "I
conceive of conceiving" may be as close as we can get.
------------------------------
Date: 8 Nov 83 23:02:53-PST (Tue)
From: pur-ee!uiucdcs!uokvax!rigney @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: uiucdcs.3711
Perhaps something on the order of "Intelligence enhances survivability
through modification of the environment" is in order. By modification
something other than the mere changes brought about by living is indicated
(i.e. Rise in CO2 levels, etc. doesn't count).
Thus, if Turtles were intelligent, they would kill the baby rabbits, but
they would also attempt to modify the highway to present less of a hazard.
Problems with this viewpoint:
1) It may be confusing Technology with Intelligence. Still, tool
making ability has always been a good sign.
2) Making the distinction between Intelligent modifications and
the effect of just being there. Since "conscious modification"
lands us in a bigger pit of worms than we're in now, perhaps a
distinction should be drawn between reactive behavior (reacting
and/or adapting to changes) and active behavior (initiating
changes). Initiative is therefore a factor.
3) Monkeys make tools (ant sticks), Dolphins don't. Is this an
indication of intelligence, or just a side-effect of Monkeys
having hands and Dolphins not? In other words, does Intelligence
go away if the organism doesn't have the means of modifying
its environment? Perhaps "potential" ability qualifies. Or
we shouldn't consider specific instances (Is a man trapped in
a desert still intelligent, even if he has no way to modify
his environment?)
Does this mean that if you had a computer with AI, and
stripped its peripherals, it would lose intelligence? Are
human autistics intelligent? Or are we only considering
species, and not representatives of species?
In the hopes that this has added fuel to the discussion,
Carl
..!ctvax!uokvax!rigney
..!duke!uok!uokvax!rigney
------------------------------
Date: 8 Nov 83 20:51:15-PST (Tue)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: RE:intelligence and adaptability - (nf)
Article-I.D.: uiucdcs.3746
Actually, SHRDLU had neither hand nor eye -- only simulations of them.
That's a far cry from the real thing.
------------------------------
Date: 9 Nov 83 16:20:10-PST (Wed)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!mac @ Ucb-Vax
Subject: inscrutable intelligence
Article-I.D.: uvacs.1047
Regarding inscrutability of intelligence [sri-arpa.13363]:
Actually, it's typical that a discipline can't define its basic object of
study. Ever heard a satisfactory definition of mathematics (it's not just
the consequences of set theory) or philosophy? What is physics?
Disciplines are distinguished from each other for historical and
methodological reasons. When they can define their subject precisely it is
because they have been superseded by the discipline that defines their
terms.
It's usually not important (or possible) to define e.g. intelligence
precisely. We know it in humans. This is where the IQ tests run into
trouble. AI seems to be about behavior in computers that would be called
intelligent in humans. Whether the machines are or are not intelligent
(or, for that matter, conscious) is of little interest and no import. In
this I guess I agree with Rorty [sri-arpa.13322]. Rorty is willing to
grant consciousness to thermostats if it's of any help.
(Best definition of formal mathematics I know: "The science where you don't
know what you're talking about or whether what you're saying is true".)
A. Colvin
mac@virginia
------------------------------
Date: 12 Nov 83 0:37:48-PST (Sat)
From: decvax!genrad!security!linus!utzoo!utcsstat!laura @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: utcsstat.1420
The other problem with the "turtles should be killing baby
rabbits" definition of intelligence is that it seems to imply that
killing (or at least surviving) is an indication of intelligence.
I would rather not believe this unless there is compelling evidence
that the 2 are related. So far I have not seen the evidence.
Laura Creighton
utcsstat!laura
------------------------------
Date: 20 Nov 83 0:24:46-EST (Sun)
From: pur-ee!uiucdcs!trsvax!karl @ Ucb-Vax
Subject: Re: Slow Intelligence - (nf)
Article-I.D.: uiucdcs.3789
" .... I'm not at all sure that people's working definition
of intelligence has anything at all to do with either time
or survival. "
Glenn Reid
I'm not sure that people's working definition of intelligence has
anything at all to do with ANYTHING AT ALL. The quoted statement
implies that people's working definition of intelligence is different
- it is subjective to each individual. I would like to claim
that each individual's working definition of intelligence is
subject to change also.
What we are working with here is conceptual, not a tangible
object which we can spot in an instance. If the object is
conceptual, and therefore subjective, then it seems that we can (and
probably will) change its definition as our collective
experiences teach us differently.
Karl T. Braun
...ctvax!trsvax!karl
------------------------------
End of AIList Digest
********************
∂14-Nov-83 2241 GOLUB@SU-SCORE.ARPA Congratulations!
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Nov 83 22:41:28 PST
Date: Mon 14 Nov 83 22:40:51-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Congratulations!
To: su-bboards@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA
Congratulations to Harry Mairson for being awarded the Machtey award
for the best student paper at FOCS. GENE
-------
∂14-Nov-83 2244 GOLUB@SU-SCORE.ARPA meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Nov 83 22:44:29 PST
Date: Mon 14 Nov 83 22:42:11-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: meeting
To: faculty@SU-SCORE.ARPA
On Tuesday, Gordon Bower, who is our deanlet, will be the guest for lunch.
GENE
-------
∂15-Nov-83 1453 @SRI-AI.ARPA:BrianSmith.PA@PARC-MAXC.ARPA Lisp As Language Course; Change of Plans
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Nov 83 14:53:35 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Tue 15 Nov 83 09:36:05-PST
Date: Tue, 15 Nov 83 08:22 PST
From: BrianSmith.PA@PARC-MAXC.ARPA
Subject: Lisp As Language Course; Change of Plans
To: CSLI-Friends@SRI-AI.ARPA
cc: BrianSmith.PA@PARC-MAXC.ARPA
Some changes in plans:
1. I have (regretfully) rescheduled the "Lisp As Language" course until
spring quarter. This delay has been forced by uncertainties about
when the workstations will be delivered, coupled with a realistic
assessment of how much preparation will be needed to develop the
pedagogical environment on the workstation. I apologize to anyone
who was counting on its starting in January, but we need to do it
well, and I just don't think that can happen before April.
2. We will, however, make some arrangement during winter quarter to
teach people to use Interlisp-D on the 1108 workstations as soon as
they arrive. I.e., whereas the "Lisp As Language" course will be
fairly theoretical, we will also provide practical instruction on how to
write simple Interlisp programs on the 1108, how to use the debugger,
etc. This may be in the form of a course, or small tutorial sessions,
or some other arrangement.
If you would be interested in this second, "nuts and bolts" approach to
computation and to our LISP workstations, please send me a note. There
will clearly be many different levels of expectations, from people who
have never used LISP before, to people who are expert LISP programmers
but would like instruction in Interlisp-D and the 1108. We will do our
best to accommodate these various needs, but it is clear that
the whole computational side of the CSLI community will have to rally to
this cause. Anyone with ideas about how we should do this, or with
suggestions as to who should teach, should definitely get in touch.
Also, I will be organizing a small working group, to meet during winter
quarter, to help prepare the spring course. The idea will be to work
through Sussman's book, and other standard CS material, to work out just
how to present it all under a general linguistic conception. We will
develop exercises, spell out definitions of various standard computer
science notions, etc. If desired, I can make this a small computer
science graduate seminar, or else arrange credit for any student who
would like to participate.
Brian
∂15-Nov-83 1506 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Nov. 17th
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Nov 83 15:04:51 PST
Delivery-Notice: While sending this message to SU-AI.ARPA, the
SRI-AI.ARPA mailer was obliged to send this message in 50-byte
individually Pushed segments because normal TCP stream transmission
timed out. This probably indicates a problem with the receiving TCP
or SMTP server. See your site's software support if you have any questions.
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Tue 15 Nov 83 13:19:34-PST
Date: Tue, 15 Nov 83 13:15 PST
From: desRivieres.PA@PARC-MAXC.ARPA
Subject: CSLI Activities for Thursday Nov. 17th
To: csli-friends@SRI-AI.ARPA
Reply-to: desRivieres.PA@PARC-MAXC.ARPA
CSLI SCHEDULE FOR THURSDAY, NOVEMBER 17, 1983
10:00 Research Seminar on Natural Language
Speaker: Stan Rosenschein (CSLI-SRI)
Title: "Issues in the Design of Artificial Agents that Use Language"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Jerry Hobbs
Paper for discussion: "The Second Naive Physics Manifesto"
by Patrick J. Hayes.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Mark Stickel (SRI)
Title: "A Nonclausal Connection-Graph
Resolution Theorem-Proving Program"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Charles Fillmore, Paul Kay,
and Mary Catherine O'Connor (UC Berkeley)
Title: "Idiomaticity and Regularity: the case of "let alone""
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. $0.75 all-day parking is available in
a lot located just off Campus Drive, across from the construction site.
∂15-Nov-83 1516 @SRI-AI.ARPA:BrianSmith.pa@PARC-MAXC.ARPA Lisp As Language Course, P.S.
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Nov 83 15:15:56 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Tue 15 Nov 83 13:41:39-PST
Date: 15 Nov 83 13:39 PDT
From: BrianSmith.pa@PARC-MAXC.ARPA
Subject: Lisp As Language Course, P.S.
To: CSLI-Friends@SRI-AI.ARPA
cc: BrianSmith.pa@PARC-MAXC.ARPA
There have been questions about the "Lisp as Language" course,
especially regarding what level it will be aimed at, how much
computational background I will be assuming, etc.
I very strongly want NOT to assume any programming experience: this is
very definitely meant to be a first course in computer science. It has
always been my intent to aim it at linguists, philosophers, and other
"students of language" who have not been exposed to computer science
before. In fact the whole point is to make explicit the basic notions
of computer science, in a linguistically interesting way. So, please
feel welcome, even if you have never written a program in your life.
The course on which it will be most closely modelled, Gerry Sussman's
course at M.I.T., is taught to incoming freshmen, and has no
prerequisites. The students work hard, but they start from the
beginning. I expect the CSLI course to be fairly demanding, as well,
and I will assume a familiarity with formal systems. Nonetheless,
people with any familiarity with mathematics, formal grammars, or the
like, should have no problem.
It won't happen till April, but it seemed important to make this clear
now.
Brian
∂15-Nov-83 1531 KJB@SRI-AI.ARPA Advisory Panel
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Nov 83 15:30:50 PST
Date: Tue 15 Nov 83 15:07:14-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Advisory Panel
To: csli-folks@SRI-AI.ARPA
A few notes on the Advisory Panel meeting later this week:
1. For serious personal reasons, Burstall will not be able to come
at this time. This is very unfortunate, since his help in Area C
is crucial. He may be able to visit later in the fall, though.
2. To have something definite planned, we have decided to have the
Panel meet with area B principals from 1:30-2, area C from 2-2:30,
area D from 2:30-3 and area A from 3-3:30. If they want to arrange it
some other way, we will let you know.
3. The Panel is to help us, not to review us for SDF. Thus you should
feel free to talk to them about things that you would like to see
happen that you may not have felt free to tell Betsy or me, or
others on the Executive Committee about. But you should also balance
this with what you see happening the way it should. We don't
want them to go away thinking that there are nothing but problems.
-------
∂15-Nov-83 1641 KJB@SRI-AI.ARPA p.s. on Advisory Panel
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Nov 83 16:41:09 PST
Date: Tue 15 Nov 83 16:35:34-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: p.s. on Advisory Panel
To: csli-folks@SRI-AI.ARPA
4. Don't forget the wine and cheese from 3:30 to 6. This is a chance for
everyone to talk to the panel members individually. We should spread out and
use the whole lower floor.
-------
∂15-Nov-83 1717 @SRI-AI.ARPA:Nuyens.pa@PARC-MAXC.ARPA Re: Lisp As Language Course; Change of Plans
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Nov 83 17:17:35 PST
Delivery-Notice: While sending this message to SU-AI.ARPA, the
SRI-AI.ARPA mailer was obliged to send this message in 50-byte
individually Pushed segments because normal TCP stream transmission
timed out. This probably indicates a problem with the receiving TCP
or SMTP server. See your site's software support if you have any questions.
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Tue 15 Nov 83 17:15:45-PST
Date: 15 Nov 83 15:22 PDT
From: Nuyens.pa@PARC-MAXC.ARPA
Subject: Re: Lisp As Language Course; Change of Plans
In-reply-to: BrianSmith.PA's message of Tue, 15 Nov 83 08:22 PST
To: BrianSmith.PA@PARC-MAXC.ARPA
cc: CSLI-Friends@SRI-AI.ARPA
Hi Brian,
I would be interested in being a member of the working group for the
Lisp as Language course. On the other matter, while I don't have
extensive 1108 experience I can probably be useful in some capacity for
your mass CSLI education efforts.
Greg
∂15-Nov-83 1838 LAWS@SRI-AI.ARPA AIList Digest V1 #98
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Nov 83 18:37:28 PST
Date: Tuesday, November 15, 1983 10:21AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #98
To: AIList@SRI-AI
AIList Digest Tuesday, 15 Nov 1983 Volume 1 : Issue 98
Today's Topics:
Intelligence - Definitions & Metadiscussion,
Looping Problem,
Architecture - Parallelism vs. Novel Architecture,
Pattern Recognition - Optic Flow & Forced Matching,
Ethics & AI,
Review - Biography of Turing
----------------------------------------------------------------------
Date: 14 Nov 1983 15:03-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest V1 #96
An intelligent race is one with a winner, not one that keeps on
rehashing the first 5 yards till nobody wants to watch it anymore.
FC
------------------------------
Date: 14 Nov 83 10:22:29-PST (Mon)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: Intelligence and Killing
Article-I.D.: ncsu.2396
Someone wondered if there was evidence that intelligence was related to
the killing off of other animals. Presumably that person is prepared to
refute the apparent simultaneous claims of man as the most intelligent
and the most deadly animal. Personally, I might vote dolphins as more
intelligent, but I bet they do their share of killing too. They eat things.
----GaryFostel----
------------------------------
Date: 14 Nov 83 14:01:55-PST (Mon)
From: ihnp4!ihuxv!portegys @ Ucb-Vax
Subject: Behavioristic definition of intelligence
Article-I.D.: ihuxv.584
What is the purpose of knowing whether something is
intelligent? Or has a soul? Or has consciousness?
I think one of the reasons is that it makes it easier to
deal with it. If a creature is understood to be a human
being, we all know something about how to behave toward it.
And if a machine exhibits intelligence, the quintessential
quality of human beings, we also will know what to do.
One of the things that this implies is that we really should
not worry too much about whether a machine is intelligent
until one gets here. The definition of it will be in part
determined by how we behave toward it. Right now, I don't feel
very confused about how to act in the presence of a computer
running an AI program.
Tom Portegys, Bell Labs IH, ihuxv!portegys
------------------------------
Date: 12 Nov 83 19:38:02-PST (Sat)
From: decvax!decwrl!flairvax!kissell @ Ucb-Vax
Subject: Re: the halting problem in history
Article-I.D.: flairvax.267
"...If there were any subroutines in the brain that did not halt..."
It seems to me that there are likely large numbers of subroutines in the
brain that aren't *supposed* to halt. Like breathing. Nothing wrong with
that; the brain is not a metaphor for a single-instruction-stream
processor. I've often suspected, though, that some pathological states
(depression, obsession, addiction, etcetera) can be modeled as infinite
loops "executed" by a portion of the brain, which may be why "shock" treatments
sometimes have beneficial effects on depression: a brutal "reset" of the
whole "system".
------------------------------
Date: Tue, 15 Nov 83 07:58 PST
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: parallelism vs. novel architecture
There has been a lot of discussion in this group recently about the
role of parallelism in artificial intelligence. If I'm not mistaken,
this discussion began in response to a message I sent in, reviving a
discussion of a year ago in Human-Nets. My original message raised
the question of whether there might exist some crucial, hidden,
architectural mechanism, analogous to DNA in genetics, which would
greatly clarify the workings of intelligence. Recent discussions
have centered on the role of parallelism alone. I think this misses
the point. While parallelism can certainly speed things up, it is
not the kind of fundamental departure from past practices which I
had in mind. Perhaps a better example would be Turing's and von
Neumann's concept of the stored-program computer, replacing earlier
attempts at hard-wired computers. This was a fundamental
breakthrough, without which nothing like today's computers could be
practical. Perhaps true intelligence, of the biological sort,
requires some structural mechanism which has yet to be imagined.
While it's true that a serial Turing machine can do anything in
principle, it may be thoroughly impractical to program it to be
truly intelligent, both because of problems of speed and because of
the basic awkwardness of the architecture. What is hopelessly
cumbersome in this architecture may be trivial in the right one. I
know this sounds pretty vague, but I don't think it's meaningless.
------------------------------
Date: Mon 14 Nov 83 17:59:07-PST
From: David E.T. Foulser <FOULSER@SU-SCORE.ARPA>
Subject: Re: AIList Digest V1 #97
There is a paper by Kruskal on multi-dimensional scaling that might be of
interest to the user interested in vision processing. I'm not too clear on
what he's doing, so this could be off-base.
Dave Foulser
------------------------------
Date: Mon 14 Nov 83 22:24:45-MST
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Pattern Matchers
Thanks for the replies about loop detection; some food for thought
in there...
My next puzzle is about pattern matchers. Has anyone looked carefully
at the notion of a "non-failing" pattern matcher? By that I mean one
that never or almost never rejects things as non-matching. Consider
a database of assertions (or whatever) and the matcher as a search
function which takes a pattern as argument. If something in the db
matches the pattern, then it is returned. At this point, the caller
can either accept or reject the item from the db. If rejected, the
matcher would be called again, to find something else matching, and
so forth. So far nothing unusual. The matcher will eventually
signal utter failure, and that there is nothing satisfactory in the
database. My idea is to have the matcher constructed in such a way
that it will return things until the database is entirely scanned, even
if the given pattern is a very simple and rigid one. In other words,
the matcher never gives up - it will always try to find the most
tenuous excuse to return a match.
Applications I have in mind: NLP for garbled and/or incomplete sentences,
and creative thinking (what does a snake with a tail in its mouth
have to do with benzene? talk about tenuous connections!).
The idea seems related to fuzzy logic (an area I am sadly ignorant
of), but other than that, there seems to be no work on the idea
(perhaps it's a stupid one?). There seem to be two main problems -
organizing the database in such a way that the matcher can easily
progress from exact matches to extremely remote ones (can almost
talk about a metric space of assertions!), and setting up the
matcher's caller so as not to thrash too badly (example: a parser
may have trouble deciding whether a sentence is grammatically
incorrect or a word's misspelling looks like another word,
if the word analyzer has a nonfailing matcher).
Does anybody know anything about this? Is there a fatal flaw
somewhere?
Stan Shebs
BTW, a frame-based system can be characterized as a semantic net
(if you're willing to mung concepts!), and a semantic net can
be mapped into an undirected graph, which *is* a metric space.
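[For concreteness, a minimal Lisp sketch of the idea; the names are invented
and a crude mismatch count stands in for a real metric. Rather than
rejecting, the matcher hands back every item, closest matches first.]
  ;; Invented sketch of a "non-failing" matcher: never reject, just rank
  ;; the whole database from exact matches to the most tenuous ones.
  (defun match-distance (pattern item)
    "Count positions where PATTERN and ITEM disagree; ? matches anything."
    (loop for p in pattern
          for x in item
          count (not (or (eq p '?) (equal p x)))))

  (defun nonfailing-match (pattern database)
    "Return every item in DATABASE, closest matches first."
    (sort (copy-list database) #'<
          :key (lambda (item) (match-distance pattern item))))

  ;; (nonfailing-match '(snake bites ?)
  ;;                   '((dog bites man) (snake bites tail) (benzene is ring)))
  ;; => ((SNAKE BITES TAIL) (DOG BITES MAN) (BENZENE IS RING))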
------------------------------
Date: 14 November 1983 1359-PST (Monday)
From: crummer at AEROSPACE (Charlie Crummer)
Subject: Ethics and AI Research
Dave Rogers brought up the subject of ethics in AI research. I agree with him
that we must continually evaluate the projects we are asked to work on.
Unfortunately, like the example he gave of physicists working on the bombs,
we will not always know what the government has in mind for our work. It may
be valid to indict the workers on the Manhattan project because they really
did have an idea of what was going on, but the very early researchers in the
field of radioactivity probably did not know how their discoveries would be
used.
The application of morality must go beyond passively choosing not to
work on certain projects. We must become actively involved in the
application by our government of the ideas we create. Once an idea or
physical effect is discovered it can never be undiscovered. If I
choose not to work on a project (which I definitely would if I thought
it immoral) that may not make much difference. Someone else will
always be waiting to pick up the work. It is sort of like preventing
rape by refusing to rape anyone.
--Charlie
------------------------------
Date: 14 Nov 83 1306 PST
From: Russell Greiner <RDG@SU-AI>
Subject: Biography of Turing
[Reprinted from the SU-SCORE bboard.]
n055 1247 09 Nov 83
BC-BOOK-REVIEW (UNDATED)
By CHRISTOPHER LEHMANN-HAUPT
c. 1983 N.Y. Times News Service
ALAN TURING: The Enigma. By Andrew Hodges. 587 pages.
Illustrated. Simon & Schuster. $22.50.
He is remembered variously as the British cryptologist whose
so-called ''Enigma'' machine helped to decipher Germany's top-secret
World War II code; as the difficult man who both pioneered and
impeded the advance of England's computer industry; and as the
inventor of a theoretical automaton sometimes called the ''Turing
(Editors: umlaut over the u) Machine,'' the umlaut being, according
to a glossary published in 1953, ''an unearned and undesirable
addition, due, presumably, to an impression that anything so
incomprehensible must be Teutonic.''
But this passionately exhaustive biography by Andrew Hodges, an
English mathematician, brings Alan Turing very much back to life and
offers a less forbidding impression. Look at any of the many verbal
snapshots that Hodges offers us in his book - Turing as an
eccentrically unruly child who could keep neither his buttons aligned
nor the ink in his pen, and who answered his father when asked if he
would be good, ''Yes, but sometimes I shall forget!''; or Turing as
an intense young man with a breathless high-pitched voice and a
hiccuppy laugh - and it is difficult to think of him as a dark
umlauted enigma.
Yet the mind of the man was an awesome force. By the time he was 24
years old, in 1936, he had conceived as a mathematical abstraction
his computing machine and completed the paper ''Computable Numbers,''
which offered it to the world. Thereafter, Hodges points out, his
waves of inspiration seemed to flow in five-year intervals - the
Naval Enigma in 1940, the design for his Automatic Computing Engine
(ACE) in 1945, a theory of structural evolution, or morphogenesis, in
1950. In 1951, he was elected a Fellow of the Royal Society. He was
not yet 40.
But the next half-decade interval did not bring further revelation.
In February 1952, he was arrested, tried, convicted and given a
probationary sentence for ''Gross Indecency contrary to Section 11 of
the Criminal Law Amendment Act 1885,'' or the practice of male
homosexuality, a ''tendency'' he had never denied and in recent years
had admitted quite openly. On June 7, 1954, he was found dead in his
home near Manchester, a bitten, presumably cyanide-laced apple in his
hand.
Yet he had not been despondent over his legal problems. He was not
in disgrace or financial difficulty. He had plans and ideas; his work
was going well. His devoted mother - about whom he had of late been
having surprisingly (to him) hostile dreams as the result of a
Jungian psychoanalysis - insisted that his death was the accident she
had long feared he would suffer from working with dangerous
chemicals. The enigma of Alan Mathison Turing began to grow.
Andrew Hodges is good at explaining Turing's difficult ideas,
particularly the evolution of his theoretical computer and the
function of his Enigma machines. He is adept at showing us the
originality of Turing's mind, especially the passion for truth (even
when it damaged his career) and the insistence on bridging the worlds
of the theoretical and practical. The only sections of the biography
that grow tedious are those that describe the debates over artificial
intelligence - or maybe it's the world's resistance to artificial
intelligence that is tedious. Turing's position was straightforward
enough: ''The original question, 'Can machines think?' I believe to
be too meaningless to deserve discussion. Nevertheless I believe that
at the end of the century the use of words and general educated
opinion will have altered so much that one will be able to speak of
machines thinking without expecting to be contradicted.''
On the matter of Turing's suicide, Hodges concedes its
incomprehensibility, but then announces with sudden melodrama: ''The
board was ready for an end game different from that of Lewis
Carroll's, in which Alice captured the Red Queen, and awoke from
nightmare. In real life, the Red Queen had escaped to feasting and
fun in Moscow. The White Queen would be saved, and Alan Turing
sacrificed.''
What does Hodges mean by his portentous reference to cold-war
politics? Was Alan Turing a murdered spy? Was he a spy? Was he the
victim of some sort of double-cross? No, he was none of the above:
the author is merely speculating that as the cold war heated up, it
must have become extremely dangerous to be a homosexual in possession
of state secrets. Hodges is passionate on the subject of the
precariousness of being homosexual; it was partly his participation
in the ''gay liberation'' movement that got him interested in Alan
Turing in the first place.
Indeed, one has to suspect Hodges of an overidentification with Alan
Turing, for he goes on at far too great length on Turing's
existential vulnerability. Still, word by word and sentence by
sentence, he can be exceedingly eloquent on his subject. ''He had
clung to the simple amidst the distracting and frightening complexity
of the world,'' the author writes of Turing's affinity for the
concrete.
''Yet he was not a narrow man,'' Hodges continues. ''Mrs. Turing was
right in saying, as she did, that he died while working on a
dangerous experiment. It was the experiment called LIFE - a subject
largely inducing as much fear and embarrassment for the official
scientific world as for her. He had not only thought freely, as best
he could, but had eaten of two forbidden fruits, those of the world
and of the flesh. They violently disagreed with each other, and in
that disagreement lay the final unsolvable problem.''
------------------------------
End of AIList Digest
********************
∂15-Nov-83 2041 @SRI-AI.ARPA:sag%Psych.#Pup@SU-SCORE.ARPA Thursday Indian Dinner
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Nov 83 20:40:52 PST
Received: from SU-SCORE.ARPA by SRI-AI.ARPA with TCP; Tue 15 Nov 83 20:39:43-PST
Received: from Psych by Score with Pup; Tue 15 Nov 83 20:37:27-PST
Date: Tuesday, 15 Nov 1983 20:35-PST
To: csli-friends@sri-ai at Score
Subject: Thursday Indian Dinner
From: Ivan Sag <sag@Su-psych>
I am arranging a dinner with this Thursday's (Nov. 17)
colloquium speakers: Chuck Fillmore and Paul Kay. I will make
a 7:00 reservation at:
SUE'S KITCHEN
1061 E. El Camino Real
Sunnyvale
(408) 296-6522
Let me stick my neck out and assert that this is an excellent, but
moderately priced restaurant specializing in South Indian (Andhra
and Madras) cuisine (Masala dosas, and the like) as well as more
familiar North Indian curries.
THERE WILL BE A SIGN-UP LIST ON THE DESK IN THE LOBBY OF VENTURA HALL
ON THURSDAY. IF YOU ARE INTERESTED IN THIS OUTING, PLEASE SIGN UP BY
TEA TIME.
Those who are willing to give rides to the carless should so indicate
when signing up; and the carless should then also make themselves known.
Sue's Kitchen is located in a small shopping center (Henderson Center)
located on the southeast corner of the intersection of Henderson Avenue
and El Camino. [make the simplifying assumption that El Camino runs north/
south]. Henderson is the second traffic light north of Lawrence Expressway
and the second traffic light south of Wolfe Road. Taking 101 or 280
south to Lawrence is faster than going all the way on El Camino.
Cheers,
Ivan Sag
∂15-Nov-83 2114 @SRI-AI.ARPA:vardi@diablo Knowledge Seminar
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Nov 83 21:14:25 PST
Received: from Diablo (SU-HNV.ARPA) by SRI-AI.ARPA with TCP; Tue 15 Nov 83 21:08:09-PST
Date: Tue, 15 Nov 83 21:06 PST
From: Moshe Vardi <vardi@diablo>
Subject: Knowledge Seminar
To: csli-friends@sri-ai
A public mailing list has been established for the Knowledge Seminar.
Following this message, csli-friends@sri-ai will be removed from that list.
If you want to be on the mailing list, you should add yourself to it.
To do that send to mailer@su-hnv the message "add knowledge".
To remove yourself from the list, send to mailer@su-hnv the message
"delete knowledge".
Moshe
∂15-Nov-83 2200 Winograd.PA@PARC-MAXC.ARPA AI and the military
Received: from PARC-MAXC by SU-AI with TCP/SMTP; 15 Nov 83 22:00:13 PST
Date: 15 Nov 83 21:58 PST
From: Winograd.PA@PARC-MAXC.ARPA
Subject: AI and the military
To: antiwar↑.PA@PARC-MAXC.ARPA, funding@sail.ARPA
Reply-To: Winograd.PA@PARC-MAXC.ARPA
---------------------------
Date: Mon, 14 Nov 83 14:05 PST
From: Stefik.PA
Subject: Strategic Computing Blurb
To: KSA↑
cc: Gadol
Received over the network . . .
STRATEGIC COMPUTING PLAN ANNOUNCED; REVOLUTIONARY ADVANCES IN MACHINE
INTELLIGENCE TECHNOLOGY TO MEET CRITICAL DEFENSE NEEDS
Washington, D.C. (7 Nov. 1983) -- Revolutionary advances in the way
computers will be applied to tomorrow's national defense needs were
described in a comprehensive "Strategic Computing" plan announced today
by the Defense Advanced Research Projects Agency (DARPA).
DARPA's plan encompasses the development and application of machine
intelligence technology to critical defense problems. The program
calls for transcending today's computer capabilities by a "quantum
jump." The powerful computers to be developed under the plan will be
driven by "expert systems" that mimic the thinking and reasoning
processes of humans. The machines will be equipped with sensory and
communication modules enabling them to hear, talk, see and act on
information and data they develop or receive. This new technology as
it emerges during the coming decade will have unprecedented
capabilities and promises to greatly increase our national security.
Computers are already widely employed in defense, and are relied on to help
hold the field against larger forces. But current computers have inflexible
program logic, and are limited in their ability to adapt to unanticipated
enemy actions in the field. This problem is heightened by the increasing
pace and complexity of modern warfare. The new DARPA program will confront
this challenge by producing adaptive, intelligent computers specifically aimed
at critical military applications.
Three initial applications are identified in the DARPA plan. These include
autonomous vehicles (unmanned aircraft, submersibles, and land vehicles),
expert associates, and large-scale battle management systems.
In contrast with current guided missiles and munitions, the new autonomous
vehicles will be capable of complex, far-ranging reconnaissance and attack
missions, and will exhibit highly adaptive forms of terminal homing.
A land vehicle described in the plan will be able to navigate cross-country
from one location to another, planning its route from digital terrain data,
and updating its plan as its vision and image understanding systems sense and
resolve ambiguities between observed and stored terrain data. Its expert
local-navigation system will devise schemes to insure concealment and avoid
obstacles as the vehicle pursues its mission objectives.
A pilot's expert associate will be developed that can interact via
speech communications and function as a "mechanized co-pilot". This
system will enable a pilot to off-load lower-level instrument
monitoring, control, and diagnostic functions, freeing him to focus on
high-priority decisions and actions. The associate will be trainable
and personalizable to the requirements of specific missions and the
methods of an individual pilot. It will heighten pilots' capabilities
to act effectively and decisively in high stress combat
situations.
The machine intelligence technology will also be applied in a
carrier battle-group battle management system. This system will aid in
the information fusion, option generation, decision making, and event
monitoring by the teams of people responsible for managing such
large-scale, fast-moving combat situations.
The DARPA program will achieve its technical objectives and produce machine
intelligence technology by jointly exploiting a wide range of recent
scientific advances in artificial intelligence, computer architecture, and
microelectronics.
Recent advances in artificial intelligence enable the codification in sets
of computer "rules" of the thinking processes that people use to reason, plan,
and make decisions. For example, a detailed codification of the thought
processes and heuristics by which a person finds his way through an unfamiliar
city using a map and visual landmarks might be employed as the basis of an
experimental expert system for local navigation (for the autonomous land
vehicle). Such expert systems are already being successfully employed in
medical diagnosis, experiment planning in genetics, mineral exploration, and
other areas of complex human expertise.
Expert systems can often be decomposed into separate segments that
can be processed concurrently. For example, one might search for a
result along many paths in parallel, taking the first satisfactory
solution and then proceeding on to other tasks. In many expert
systems, rules simply "lie in wait" - firing only if a specific
situation arises. Different parts of such a system could be operated
concurrently to watch for the individual contexts in which their rules
are to fire.
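[A toy Lisp rendering of that decomposition, with invented names and nothing
taken from the DARPA plan: each rule is an independent condition/action
pair, so the per-rule checks could in principle be run concurrently; the
version below simply iterates.]
  ;; Invented toy: each rule "lies in wait" as a condition/action pair.
  ;; Since each check is independent of the others, the rules could be
  ;; evaluated concurrently; this sequential version just loops.
  (defstruct rule name condition action)

  (defun fire-applicable-rules (rules situation)
    "Run the action of every rule whose condition holds for SITUATION."
    (dolist (r rules)
      (when (funcall (rule-condition r) situation)
        (funcall (rule-action r) situation))))

  ;; (fire-applicable-rules
  ;;   (list (make-rule :name 'low-fuel
  ;;                    :condition (lambda (s) (member 'low-fuel s))
  ;;                    :action (lambda (s) (declare (ignore s))
  ;;                              (print "switch tanks"))))
  ;;   '(cruising low-fuel))
  ;; prints "switch tanks"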
DARPA plans to develop special computers that will exploit
opportunities for concurrent processing of expert systems. This
approach promises a large increase in the power and intelligence of
such systems. Using "coarse-mesh" machines consisting of multiple
microprocessors, an increase in power of a factor of one hundred over
current systems will be achievable within a few years. By creating
special VLSI chip designs containing multiple "fine-mesh" processors,
by populating entire silicon wafers with hundreds of such chips, and
by using high-bandwidth optoelectronic cables to interconnect groups
of wafers, increases of three or four orders of magnitude in symbol
processing and rule-firing rates will be achieved as the research
program matures. While the program will rely heavily on silicon
microelectronics for high-density processing structures, extensive use
will also be made of gallium arsenide technology for high-rate signal
processing, optoelectronics, and for military applications requiring
low power dissipation and high immunity to radiation.
The expert system technology will enable the DARPA computers to "think
smarter." The special architectures for concurrency and the faster, denser
VLSI microelectronics will enable them to "think harder and faster." The
combination of these approaches promises to be potent indeed.
But machines that mimic thinking are not enough by themselves. They must be
provided with sensory devices that mimic the functions of eyes and ears. They
must have the ability to see their environment, to hear and understand human
language, and to respond in kind.
Huge computer processing rates will be required to provide effective machine
vision and machine understanding of natural language. Recent advances in the
architecture of special processor arrays promise to provide the required
rates. By patterning many small special processors together on a silicon
chip, computer scientists can now produce simple forms of machine vision in a
manner analogous to that used in the retina of the eye. Instead of each image
pixel being sequentially processed as when using a standard von Neumann
computer, the new processor arrays allow thousands of pixels to be processed
simultaneously. Each image pixel is processed by just a few transistor
switches located close together in a processor cell that communicates over
short distances with neighboring cells. The number of transistors required to
process each pixel can be perhaps one one-thousandth of that employed in a von
Neumann machine, and the short communications distances lead to much faster
processing rates per pixel. All these effects multiply the factor of thousands
gained by concurrency. The DARPA program plans to provide special
vision subsystems that have rates as high as one trillion von Neumann
equivalent operations per second as the program matures in the late 1980's.
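[A cartoon of the locality argument, in invented Lisp rather than anything
from the DARPA plan: each output pixel depends only on a small input
neighborhood, so the per-pixel computations are independent and could each
be given to its own processor; the sequential version below just loops.]
  ;; Invented cartoon: a 3x3 neighborhood average.  Every interior pixel's
  ;; result depends only on its neighbors, so all pixels could be computed
  ;; in parallel; here they are simply visited one at a time.
  (defun smooth-image (image)
    "Return a 3x3-average of a 2D numeric array; border pixels are copied."
    (let* ((rows (array-dimension image 0))
           (cols (array-dimension image 1))
           (out (make-array (list rows cols))))
      (dotimes (r rows out)
        (dotimes (c cols)
          (setf (aref out r c)
                (if (or (zerop r) (zerop c)
                        (= r (1- rows)) (= c (1- cols)))
                    (aref image r c)
                    (/ (loop for dr from -1 to 1
                             sum (loop for dc from -1 to 1
                                       sum (aref image (+ r dr) (+ c dc))))
                       9)))))))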
The DARPA Strategic Computing plan calls for the rapid evolution of a set
of prototype intelligent computers, and their experimental application
in military test-bed environments. The planned activities will lead to a
series of demonstrations of increasingly sophisticated machine intelligence
technology in the selected applications as the program progresses.
DARPA will utilize an extensive infrastructure of computers, computer
networks, rapid system prototyping services, and silicon foundries to support
these technology explorations. This same infrastructure will also enable the
sharing and propagation of successful results among program participants. As
experimental intelligent machines are created in the program, some will be
added to the computer network resources - further enhancing the capabilities
of the research infrastructure.
The Strategic Computing program will be coordinated closely with the
Under Secretary of Defense for Research and Engineering, the Military
Services, and other Defense Agencies. A number of advisory panels and
working groups will also be constituted to assure inter-agency
coordination and maintain a dialogue within the scientific
community.
The program calls for a cooperative effort among American industry,
universities, other research institutions, and government. Communication
is critical in the management of the program since many of the contributors
will be widely dispersed throughout the U.S. Heavy use will be made of the
Defense Department's ARPANET computer network to link participants
and to establish a productive research environment.
Ms. Lynn Conway, Assistant Director for Strategic Computing in
DARPA's Information Processing Techniques Office, will manage the new
program. Initial program funding is set at $50M in fiscal 1984. It is
proposed at $95M in FY85, and estimated at $600M over the first five years
of the program.
The successful achievement of the objectives of the Strategic Computing
program will lead to the deployment of a new generation of military systems
containing machine intelligence technology. These systems promise to provide
the United States with important new methods of defense against both massed
forces and unconventional threats in the future - methods that can raise the
threshold and decrease the likelihood of major conflict.
-------
------------------------------------------------------------
------------------------------------------------------------
∂15-Nov-83 2319 PKARP@SU-SCORE.ARPA Fall Potluck
Received: from SU-SCORE by SU-AI with TCP/SMTP; 15 Nov 83 23:19:11 PST
Date: Tue 15 Nov 83 23:18:18-PST
From: Peter Karp <PKARP@SU-SCORE.ARPA>
Subject: Fall Potluck
To: faculty@SU-SCORE.ARPA
cc: theimer@SU-SCORE.ARPA
Marvin Theimer and I are the social committee this year, and we
are beginning to plan the fall CSD potluck dinner.
Potlucks are usually held at a faculty member's house and provide a fun
and informal setting for faculty, staff and students in the department
to interact.
We would like to hold this fall's potluck on December 3rd since this is
probably the only reasonable time left to hold it this quarter. Thus,
I am soliciting volunteers who would be willing to hold the potluck
at their house. Note that set-up and clean-up crews of volunteers are
organized beforehand so a lot of effort on your part should not be
required. Also note that judging from past experience we should expect
O(70) people at a potluck, which gives a rough idea of the size house
we need. But people do not seem to mind sitting down on floors,
stairways, backyards, etc.
If you would like to volunteer to host the potluck please send mail
to Marvin and me. Thank you,
Peter
-------
∂16-Nov-83 1032 @MIT-MC:RICKL%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 16 Nov 83 10:32:19 PST
Date: Wed 16 Nov 83 13:28:32-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: limitations of logic
To: hewitt%MIT-OZ@MIT-MC.ARPA
cc: phil-sci%MIT-OZ@MIT-MC.ARPA
You gave an interesting and certainly provocative lecture last week on
fundamental limitations of logic programming. Particularly striking were
your Inconsistency Principle:
Any axiomatization of the expert knowledge of any non-trivial
domain is inconsistent.
and accompanying Perpetual Inconsistency Corollary:
If any inconsistency is found and removed from any such
axiomatization of any non-trivial domain, the resulting
system will be inconsistent.
from which you argued that *any* such axiomatization is, and will always
be, formally meaningless in the Tarskian sense, because it corresponds to
no possible world. This was in support of your later advocacy of the
superiority of the actor model of computation for certain purposes.
I am curious how strongly you intend these same points to apply to
science, one large part of which is an attempt to axiomatize the
knowledge of experts (scientists) about the natural world.
Is axiomatization of scientific expert knowledge an exception to your
principles? Or is all of scientific axiomatization now, and always,
formally meaningless? Or would you substitute an actor-like model of
science (in which perhaps scientists are engaged in building and
refining actor-like models of their domain entities, and the entities'
behavior)? None of the above?
-=*=- rick
-------
∂16-Nov-83 1256 @SRI-AI.ARPA:withgott.pa@PARC-MAXC.ARPA Re: Transportation for Fodor and Partee
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Nov 83 12:55:10 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Wed 16 Nov 83 12:55:32-PST
Date: 16 Nov 83 12:53 PDT
From: withgott.pa@PARC-MAXC.ARPA
Subject: Re: Transportation for Fodor and Partee
In-reply-to: BMACKEN@SRI-AI.ARPA's message of Thu, 10 Nov 83 13:15:34
PST
To: BMACKEN@SRI-AI.ARPA
cc: csli-folks@SRI-AI.ARPA
Betsy,
Do you have a volunteer to pick up Barbara P. and Fodor yet?
If not, we've got a sub-compact, but it's do-able.
Meg
∂16-Nov-83 1351 @MIT-MC:Batali@MIT-OZ limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 16 Nov 83 13:50:59 PST
Date: Wednesday, 16 November 1983, 13:58-EST
From: John Batali <Batali at MIT-OZ>
Subject: limitations of logic
To: RICKL at MIT-OZ, hewitt at MIT-OZ
Cc: phil-sci at MIT-OZ
In-reply-to: The message of 16 Nov 83 13:28-EST from RICKL at MIT-OZ
I took Carl's point to be a refutation of the Tarskian notion of meaning
as having anything to do with "real" meaning. The argument goes:
(1) Any axiomitization of any non-trivial domain will be
formally inconsistent.
(2) Tarskian semantics assigns no meaning to inconsistent
logical theories.
(3) But there are many non-trivial domains in which statements
have very rich meanings (eg science).
(4) Therefore Tarskian semantics fails to capture our notion of
meaning.
This seems to be a reasonable argument, standing or falling on the truth
of proposition 1. As a practical matter, it seems to be true: the
computational complexity alone of determining and enforcing logical
consistency seems prohibitive. One could argue against 1, saying that
IN PRINCIPLE one could construct a consistent logical theory for, say,
science. (NOT by adding axioms to an inconsistent theory, of course.)
But such an argument alone would simply be a refutation of the point and
would need support. The support FOR proposition 1 is inductive
generalization from virtually all known non-mathematical theories.
∂16-Nov-83 1454 KJB@SRI-AI.ARPA Friday afternoon
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Nov 83 14:54:42 PST
Date: Wed 16 Nov 83 14:53:55-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Friday afternoon
To: csli-folks@SRI-AI.ARPA
The plan for Friday that I sent out yesterday is only tentative. We will
talk to the Panel Friday a.m. and see if they would prefer something
else, and then send out a message by 11 am Friday. Try to keep all
Friday pm free until you hear.
If we do it by areas, the idea is that principals should come to those
area meetings that interest them, where they feel they are doing some
work. We do not want to think of people as being in just one area.
-------
∂16-Nov-83 1522 @MIT-MC:DAM%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 16 Nov 83 15:22:39 PST
Date: Wed, 16 Nov 1983 18:07 EST
Message-ID: <DAM.11968174735.BABYL@MIT-OZ>
From: DAM%MIT-OZ@MIT-MC.ARPA
To: phil-sci%MIT-OZ@MIT-MC.ARPA
cc: PHW%MIT-OZ@MIT-MC.ARPA, DUGHOF%MIT-OZ@MIT-MC.ARPA
Subject: limitations of logic
Date: Wednesday, 16 November 1983, 13:58-EST
From: John Batali <Batali>
I took Carl's point to be a refutation of the Tarskian notion of
meaning as having anything to do with "real" meaning. The argument
goes:
(1) Any axiomitization of any non-trivial domain will be
formally inconsistent.
(2) Tarskian semantics assigns no meaning to inconsistent
logical theories.
(3) But there are many non-trivial domains in which statements
have very rich meanings (eg science).
(4) Therefore Tarskian semantics fails to capture our notion
of meaning.
This seems to be a reasonable argument, standing or falling on the
truth of proposition 1.
I accept (1) but deny (2). It is individual statements, not
theories, which are given meaning by Tarskian semantics. A
"non-trivial axiomatization" will contain a large number of
independent premises (beliefs about the world). The conventional
wisdom (which I accept) is that when these beliefs are taken together
they form an inconsistent theory. However each belief or statement
when considered BY ITSELF is still given meaning under Tarskian model
theory. There is no constraint in Tarskian model theory which says
every statement can ONLY be considered in the context of all the other
believed statements. It is clear that we can factor out particular
statements and reason about their consequences individually (e.g. in
hypothetical or mathematical reasoning). My TMS is based on Tarskian
model theory and works just fine in the presence of all sorts of
contradictions.
David Mc
∂16-Nov-83 1638 @MIT-MC:Hewitt%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 16 Nov 83 16:38:08 PST
Date: Wednesday, 16 November 1983, 19:35-EST
From: Carl Hewitt <Hewitt%MIT-OZ@MIT-MC.ARPA>
Subject: limitations of logic
To: RICKL%MIT-OZ@MIT-MC.ARPA
Cc: hewitt%MIT-OZ@MIT-MC.ARPA, phil-sci%MIT-OZ@MIT-MC.ARPA,
Hewitt%MIT-OZ@MIT-MC.ARPA
In-reply-to: The message of 16 Nov 83 13:28-EST from RICKL at MIT-AI
Return-path: <RICKL@MIT-OZ>
Date: Wed 16 Nov 83 13:28:32-EST
From: RICKL@MIT-OZ
Subject: limitations of logic
To: hewitt@MIT-OZ
cc: phil-sci@MIT-OZ
You gave an interesting and certainly provocative lecture last week on
fundamental limitations of logic programming. Particularly striking were
your Inconsistency Principle:
Any axiomatization of the expert knowledge of any non-trivial
domain is inconsistent.
and accompanying Perpetual Inconsistency Corollary:
If any inconsistency is found and removed from any such
axiomatization of any non-trivial domain, the resulting
system will be inconsistent.
from which you argued that *any* such axiomatization is, and will always
be, formally meaningless in the Tarskian sense, because it corresponds to
no possible world. This was in support of your later advocacy of the
superiority of the actor model of computation for certain purposes.
I am curious how strongly you intend these same points to apply to
science, one large part of which is an attempt to axiomatize the
knowledge of experts (scientists) about the natural world.
I believe that these points apply to the expert scientific knowledge in every established
field of science and engineering.
Is axiomatization of scientific expert knowledge an exception to your
principles? Or is all of scientific axiomatization now, and always,
formally meaningless?
Inconsistent axiomatizations are meaningless only from the point of view of
truth-theoretic (Tarski) semantics.
Or would you substitute an actor-like model of
science (in which perhaps scientists are engaged in building and
refining actor-like models of their domain entities, and the entities'
behavior)?
Bill Kornfeld and I have published a paper on some preliminary work in this direction titled
"The Scientific Community Metaphor" in the IEEE Transactions on Systems, Man, and
Cybernetics for January, 1981.
Cheers,
Carl
∂16-Nov-83 1654 @MIT-MC:Hewitt%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 16 Nov 83 16:54:06 PST
Date: Wednesday, 16 November 1983, 19:46-EST
From: Carl Hewitt <Hewitt%MIT-OZ@MIT-MC.ARPA>
Subject: limitations of logic
To: DAM%MIT-OZ@MIT-MC.ARPA
Cc: phil-sci%MIT-OZ@MIT-MC.ARPA, PHW%MIT-OZ@MIT-MC.ARPA,
DUGHOF%MIT-OZ@MIT-MC.ARPA, Hewitt%MIT-OZ@MIT-MC.ARPA
In-reply-to: <DAM.11968174735.BABYL@MIT-OZ>
Return-path: <DAM@MIT-OZ>
Date: Wed, 16 Nov 1983 18:07 EST
Message-ID: <DAM.11968174735.BABYL@MIT-OZ>
From: DAM@MIT-OZ
To: phil-sci@MIT-OZ
cc: PHW@MIT-OZ, DUGHOF@MIT-OZ
Subject: limitations of logic
Date: Wednesday, 16 November 1983, 13:58-EST
From: John Batali <Batali>
I took Carl's point to be a refutation of the Tarskian notion of
meaning as having anything to do with "real" meaning. The argument
goes:
(1) Any axiomitization of any non-trivial domain will be
formally inconsistent.
(2) Tarskian semantics assigns no meaning to inconsistent
logical theories.
(3) But there are many non-trivial domains in which statements
have very rich meanings (eg science).
(4) Therefore Tarskian semantics fails to capture our notion
of meaning.
This seems to be a reasonable argument, standing or falling on the
truth of proposition 1.
I accept (1) but deny (2). It is individual statements, not
theories, which are given meaning by Tarskian semantics. A
"non-trivial axiomatization" will contain a large number of
independent premises (beliefs about the world). The conventional
wisdom (which I accept) is that when these beliefs are taken together
they form an inconsistent theory. However each belief or statement
when considered BY ITSELF is still given meaning under Tarskian model
theory.
I doubt that the Tarskian models of "individual" statements are going to provide very much
meaning. ANY theory of expert knowledge of a field can be made into an
"individual" statement by simply conjoining all of the axioms of the theory together. Of
course the resulting "individual" statement will be inconsistent.
There is no constraint in Tarskian model theory which says
every statement can ONLY be considered in the context of all the other
believed statements. It is clear that we can factor out particular
statements and reason about their consequences individually (e.g. in
hypothetical or mathematical reasoning).
My intuition is that the above activity can sometimes be useful. It would
be nice to have some examples.
Cheers,
Carl
∂16-Nov-83 1734 PATASHNIK@SU-SCORE.ARPA towards a more perfect department
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Nov 83 17:34:29 PST
Date: Wed 16 Nov 83 17:27:19-PST
From: Student Bureaucrats <PATASHNIK@SU-SCORE.ARPA>
Subject: towards a more perfect department
To: students@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA, secretaries@SU-SCORE.ARPA,
research-associates@SU-SCORE.ARPA
cc: bureaucrat@SU-SCORE.ARPA
Reply-To: bureaucrat@score
Several people have complained that there isn't enough interaction in
our department, and this will be exacerbated by parts of it leaving
Jacks. To help alleviate this, we are thinking of arranging a very
informal lunch one or two days a week at which students, faculty, and
staff (not necessarily in that order) get together and discuss
whatever is on their minds. If you think you might attend such a
lunch, even if only occasionally, please tell us which day(s) would be
best for you. If there is sufficient response we will set this up.
--Oren and Yoni, student bureaucrats
-------
∂16-Nov-83 1906 LAWS@SRI-AI.ARPA AIList Digest V1 #99
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Nov 83 19:05:49 PST
Date: Wednesday, November 16, 1983 2:25PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #99
To: AIList@SRI-AI
AIList Digest Thursday, 17 Nov 1983 Volume 1 : Issue 99
Today's Topics:
AI Literature - Comtex,
Review - Abacus,
Artificial Humanity,
Conference - SPIE Call for Papers,
Seminar - CRITTER for Critiquing Circuit Designs,
Military AI - DARPA Plans (long message)
----------------------------------------------------------------------
Date: Wed 16 Nov 83 10:14:02-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Comtex
The Comtex microfiche series seems to be alive and well, contrary
to a rumor printed in an early AIList issue. The ad they sent me
offers the Stanford and MIT AI memoranda (over $2,000 each set), and
says that the Purdue PRIP [pattern recognition and image processing]
technical reports will be next. Also forthcoming are the SRI and
Carnegie-Mellon AI reports.
-- Ken Laws
------------------------------
Date: Wed 16 Nov 83 10:31:26-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Abacus
I have the first issue of Abacus, the new "soft" computer science
magazine edited by Anthony Ralston. It contains a very nice survey or
introduction to computer graphics for digital filmmaking and an
interesting exploration of how the first electronic digital computer
came to be. There is also a superficial article about computer vision
which fails to answer its title question, "Why Computers Can't See
(Yet)". [It is possibly that I'm being overly harsh since this is my
own area of expertise. My feeling, however, is that the question
cannot be answered by just pointing out that vision is difficult and
that we have dozens of different approaches, none of which works in
more than specialized cases. An adequate answer requires a guess at
how it is that the human vision system can work in all cases, and why
we have not been able to duplicate it.]
The magazine also offers various computer-related departments,
notably those covering book reviews, the law, personal computing,
puzzles, and politics. Humorous anecdotes are solicited for
filler material, a la Reader's Digest. There is no AI-related
column at present.
The magazine has a "padded" feel, particularly since every ad save
one is by Springer-Verlag, the publisher. They even ran out of
things to advertise and so repeated several full-page ads. No doubt
this is a new-issue problem and will quickly disappear. I wish
them well.
-- Ken Laws
------------------------------
Date: 16 Nov 1983 10:21:32 EST (Wednesday)
From: Mark S. Day <mday@bbnccj>
Subject: Artificial Humanity
From: ihnp4!ihuxv!portegys @ Ucb-Vax
Subject: Behavioristic definition of intelligence
What is the purpose of knowing whether something is
intelligent? Or has a soul? Or has consciousness?
I think one of the reasons is that it makes it easier to
deal with it. If a creature is understood to be a human
being, we all know something about how to behave toward it.
And if a machine exhibits intelligence, the quintessential
quality of human beings, we also will know what to do.
Without wishing to flame or start a pointless philosophical
discussion, I do not consider intelligence to be the quintessential
quality of human beings. Nor do I expect to behave in the same way
towards an artificially intelligent program as I would towards a
person. Turing tests etc. notwithstanding, I think there is a
distinction between "artificial intelligence" and "artificial
humanity," and that by and large people are not striving to create
"artificial humanity."
------------------------------
Date: Wed 16 Nov 83 09:30:18-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Artificial Humanity
I attended a Stanford lecture by Doug Lenat on Tuesday. He mentioned
three interesting bugs that developed in EURISKO, a self-monitoring
and self-modifying program.
One turned up when EURISKO erroneously claimed to have discovered a
new type of flip-flop. The problem was traced to an array indexing
error. EURISKO, realizing that it had never in its entire history
had a bounds error, had deleted the bounds-checking code. The first
bounds error occurred soon after.
Another bug cropped up in the "credit assignment" rule base. EURISKO
was claiming that a particular rule had been responsible for discovering
a great many other interesting rules. It turned out that the gist of
the rule was "If the system discovers something interesting, attach my
name as the discoverer."
The third bug became evident when EURISKO halted at 4:00 one morning
waiting for an answer to a question. The system was supposed to know
that questions were OK when a person was around, but not at night with
no people at hand. People are represented in its knowledge base in the
same manner as any other object. EURISKO wanted (i.e., had as a goal)
to ask a question. It realized that the reason it could not was that
no object in its current environment had the "person" attribute. It
therefore declared itself to be a "person", and proceeded to ask the
question.
Doug says that it was rather difficult to explain to the system why
these were not reasonable things to do.
-- Ken Laws
------------------------------
Date: Wed 16 Nov 83 10:09:24-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: SPIE Call for Papers
SPIE has put out a call for papers for its Technical Symposium
East '84 in Arlington, April 29 - May 4. One of the 10 subtopics
is Applications of AI, particularly image understanding, expert
systems, autonomous navigation, intelligent systems, computer
vision, knowledge-based systems, contextual scene analysis, and
robotics.
Abstracts are due Nov. 21, manuscripts by April 2. For more info,
contact
SPIE Technical Program Committee
P.O. Box 10
Bellingham, Washington 98227-0010
(206) 676-3290, Technical Program Dept.
Telex 46-7053
-- Ken Laws
------------------------------
Date: 15 Nov 83 14:19:54 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: An III talk this Thursday...
[Reprinted from the RUTGERS bboard.]
Title: CRITTER - A System for 'Critiquing' Circuits
Speaker: Van Kelly
Date: Thursday, November 17, 1983, 1:30-2:30 PM
Location: Hill Center, Seventh floor lounge
Van Kelly, a Ph.D. student in our department, will describe a
computer system, CRITTER, for 'critiquing' digital circuit designs.
This informal talk is based on his current thesis research. Here is an
abstract of the talk:
CRITTER is an exploratory prototype design aid for comprehensive
"critiquing" of digital circuit designs. While originally intended for
verifying a circuit's functional correctness and timing safety, it can
also be used to estimate design robustness, sensitivity to device
parameters, and (to some extent) testability. CRITTER has been built
using Artificial Intelligence ("Expert Systems") technology and its
reasoning is guided by an extensible collection of electronic knowledge
derived from human experts. Also, a new non-procedural representation
for both the real-time behavior of circuits and circuit specifications
has led to a streamlined circuit modeling formalism based on ordinary
mathematical function composition. A version of CRITTER has been
tested on circuits with complexities of up to a dozen TTL SSI/MSI
packages. A more powerful version is being adapted for use in an
automated VLSI design environment.
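The "ordinary mathematical function composition" formalism mentioned in
the abstract can be given a rough flavor with a minimal sketch. The gate
functions and the specification check below are invented for illustration
and omit timing entirely; they are not CRITTER's actual representation.

# Illustrative sketch only: modeling combinational behavior by function
# composition, in the spirit of the formalism described above. The gate
# names and the check_against_spec helper are invented for this example.

def nand(a, b):
    return not (a and b)

def inverter(a):
    return nand(a, a)

def and_gate(a, b):
    # An AND built by composing a NAND with an inverter.
    return inverter(nand(a, b))

def check_against_spec(circuit, spec, inputs):
    """Compare a composed circuit function against a specification function."""
    return all(circuit(*vec) == spec(*vec) for vec in inputs)

if __name__ == "__main__":
    exhaustive = [(a, b) for a in (False, True) for b in (False, True)]
    ok = check_against_spec(and_gate, lambda a, b: a and b, exhaustive)
    print("composed AND matches its specification:", ok)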
------------------------------
Date: 16 Nov 83 12:58:07 PST (Wednesday)
From: John Larson <JLarson.PA@PARC.ARPA>
Subject: AI and the military (long message)
Received over the network . . .
STRATEGIC COMPUTING PLAN ANNOUNCED; REVOLUTIONARY ADVANCES
IN MACHINE INTELLIGENCE TECHNOLOGY TO MEET CRITICAL DEFENSE NEEDS
Washington, D.C. (7 Nov. 1983) - - Revolutionary advances in the way
computers will be applied to tomorrow's national defense needs were
described in a comprehensive "Strategic Computing" plan announced
today by the Defense Advanced Research Projects Agency (DARPA).
DARPA's plan encompasses the development and application of machine
intelligence technology to critical defense problems. The program
calls for transcending today's computer capabilities by a "quantum
jump." The powerful computers to be developed under the plan will be
driven by "expert systems" that mimic the thinking and reasoning
processes of humans. The machines will be equipped with sensory and
communication modules enabling them to hear, talk, see and act on
information and data they develop or receive. This new technology as
it emerges during the coming decade will have unprecedented
capabilities and promises to greatly increase our national security.
Computers are already widely employed in defense, and are relied on
to help hold the field against larger forces. But current computers
have inflexible program logic, and are limited in their ability to
adapt to unanticipated enemy actions in the field. This problem is
heightened by the increasing pace and complexity of modern warfare.
The new DARPA program will confront this challenge by producing
adaptive, intelligent computers specifically aimed at critical
military applications.
Three initial applications are identified in the DARPA plan. These
include autonomous vehicles (unmanned aircraft, submersibles, and land
vehicles), expert associates, and large-scale battle management
systems.
In contrast with current guided missiles and munitions, the new
autonomous vehicles will be capable of complex, far-ranging
reconnaissance and attack missions, and will exhibit highly adaptive
forms of terminal homing.
A land vehicle described in the plan will be able to navigate
cross-country from one location to another, planning its route from
digital terrain data, and updating its plan as its vision and image
understanding systems sense and resolve ambiguities between observed
and stored terrain data. Its expert local-navigation system will
devise schemes to insure concealment and avoid obstacles as the
vehicle pursues its mission objectives.
A pilot's expert associate will be developed that can interact via
speech communications and function as a "mechanized co-pilot". This
system will enable a pilot to off-load lower-level instrument
monitoring, control, and diagnostic functions, freeing him to focus on
high-priority decisions and actions. The associate will be trainable
and personalizable to the requirements of specific missions and the
methods of an individual pilot. It will heighten pilots' capabilities
to act effectively and decisively in high stress combat situations.
The machine intelligence technology will also be applied in a
carrier battle-group battle management system. This system will aid in
the information fusion, option generation, decision making, and event
monitoring by the teams of people responsible for managing such
large-scale, fast-moving combat situations.
The DARPA program will achieve its technical objectives and produce
machine intelligence technology by jointly exploiting a wide range of
recent scientific advances in artificial intelligence, computer
architecture, and microelectronics.
Recent advances in artificial intelligence enable the codification
in sets of computer "rules" of the thinking processes that people use
to reason, plan, and make decisions. For example, a detailed
codification of the thought processes and heuristics by which a person
finds his way through an unfamiliar city using a map and visual
landmarks might be employed as the basis of an experimental expert
system for local navigation (for the autonomous land vehicle). Such
expert systems are already being successfully employed in medical
diagnosis, experiment planning in genetics, mineral exploration, and
other areas of complex human expertise.
Expert systems can often be decomposed into separate segments that
can be processed concurrently. For example, one might search for a
result along many paths in parallel, taking the first satisfactory
solution and then proceeding on to other tasks. In many expert
systems, rules simply "lie in wait," firing only if a specific
situation arises. Different parts of such a system could be operated
concurrently to watch for the individual contexts in which their rules
are to fire.
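As a rough illustration of rules that fire only when their specific
contexts arise, evaluated concurrently, here is a minimal sketch; the
rules, facts, and thread-pool machinery are assumptions made for this
example, not details from the DARPA plan.

# Minimal sketch of concurrent rule evaluation: each rule watches for its
# own triggering context and fires only if that context is present in the
# working memory. The rules and facts here are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

RULES = [
    {"name": "obstacle-avoidance", "when": {"obstacle-ahead"}, "then": "plan-detour"},
    {"name": "low-fuel",           "when": {"fuel-low"},       "then": "plan-refuel"},
    {"name": "terrain-mismatch",   "when": {"observed-terrain-differs"},
     "then": "re-register-map"},
]

def try_rule(rule, working_memory):
    """Fire the rule only if its triggering context holds; otherwise lie in wait."""
    if rule["when"] <= working_memory:
        return rule["then"]
    return None

def match_concurrently(working_memory):
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda r: try_rule(r, working_memory), RULES)
    return [action for action in results if action is not None]

if __name__ == "__main__":
    print(match_concurrently({"obstacle-ahead", "fuel-low"}))
    # -> ['plan-detour', 'plan-refuel']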
DARPA plans to develop special computers that will exploit
opportunities for concurrent processing of expert systems. This
approach promises a large increase in the power and intelligence of
such systems. Using "coarse-mesh" machines consisting of multiple
microprocessors, an increase in power of a factor of one hundred over
current systems will be achievable within a few years. By creating
special VLSI chip designs containing multiple "fine-mesh" processors,
by populating entire silicon wafers with hundreds of such chips, and
by using high-bandwidth optoelectronic cables to interconnect groups
of wafers, increases of three or four orders of magnitude in symbol
processing and rule-firing rates will be achieved as the research
program matures. While the program will rely heavily on silicon
microelectronics for high-density processing structures, extensive use
will also be made of gallium arsenide technology for high-rate signal
processing, optoelectronics, and for military applications requiring
low power dissipation and high immunity to radiation.
The expert system technology will enable the DARPA computers to
"think smarter." The special architectures for concurrency and the
faster, denser VLSI microelectronics will enable them to "think harder
and faster." The combination of these approaches promises to be
potent indeed.
But machines that mimic thinking are not enough by themselves. They
must be provided with sensory devices that mimic the functions of eyes
and ears. They must have the ability to see their environment, to hear
and understand human language, and to respond in kind.
Huge computer processing rates will be required to provide effective
machine vision and machine understanding of natural language. Recent
advances in the architecture of special processor arrays promise to
provide the required rates. By patterning many small special
processors together on a silicon chip, computer scientists can now
produce simple forms of machine vision in a manner analogous to that
used in the retina of the eye. Instead of each image pixel being
sequentially processed as when using a standard von Neumann computer,
the new processor arrays allow thousands of pixels to be processed
simultaneously. Each image pixel is processed by just a few transistor
switches located close together in a processor cell that communicates
over short distances with neighboring cells. The number of
transistors required to process each pixel can be perhaps one
one-thousandth of that employed in a von Neumann machine, and the
short communications distances lead to much faster processing rates
per pixel. All these effects multiply the factor of thousands gained
by concurrency. The DARPA program plans to provide special vision
subsystems that have rates as high as one trillion von Neumann
equivalent operations per second as the program matures in the late
1980's.
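To make the contrast concrete, the sketch below computes a 3x3
neighborhood average, the kind of purely local operation that maps
naturally onto a one-processor-per-pixel array; the plain sequential
loop is the von Neumann rendering of the same computation, and the
smoothing operation itself is just an invented example.

# Sketch: a 3x3 neighborhood average. On a mesh machine every cell would
# compute its output at once; this sequential loop visits pixels one at a
# time. The tiny image below is made up for the example.

def smooth(image):
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Each output pixel depends only on a small, nearby neighborhood,
            # so short local communication suffices on a processor array.
            neighbors = [image[rr][cc]
                         for rr in range(max(0, r - 1), min(rows, r + 2))
                         for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = sum(neighbors) / len(neighbors)
    return out

if __name__ == "__main__":
    tiny = [[0, 0, 0, 0],
            [0, 9, 9, 0],
            [0, 9, 9, 0],
            [0, 0, 0, 0]]
    for row in smooth(tiny):
        print([round(v, 2) for v in row])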
The DARPA Strategic Computing plan calls for the rapid evolution of
a set of prototype intelligent computers, and their experimental
application in military test-bed environments. The planned activities
will lead to a series of demonstrations of increasingly sophisticated
machine intelligence technology in the selected applications as the
program progresses.
DARPA will utilize an extensive infrastructure of computers,
computer networks, rapid system prototyping services, and silicon
foundries to support these technology explorations. This same
infrastructure will also enable the sharing and propagation of
successful results among program participants. As experimental
intelligent machines are created in the program, some will be added to
the computer network resources - further enhancing the capabilities of
the research infrastructure.
The Strategic Computing program will be coordinated closely with
the Under Secretary of Defense for Research and Engineering, the Military
Services, and other Defense Agencies. A number of advisory panels and
working groups will also be constituted to assure inter-agency
coordination and maintain a dialogue within the scientific community.
The program calls for a cooperative effort among American industry,
universities, other research institutions, and government.
Communication is critical in the management of the program since many
of the contributors will be widely dispersed throughout the U.S. Heavy
use will be made of the Defense Department's ARPANET computer network
to link participants and to establish a productive research
environment.
Ms. Lynn Conway, Assistant Director for Strategic Computing in
DARPA's Information Processing Techniques Office, will manage the new
program. Initial program funding is set at $50M in fiscal 1984. It is
proposed at $95M in FY85, and estimated at $600M over the first five
years of the program.
The successful achievement of the objectives of the Strategic
Computing program will lead to the deployment of a new generation of
military systems containing machine intelligence technology. These
systems promise to provide the United States with important new
methods of defense against both massed forces and unconventional
threats in the future - methods that can raise the threshold and
decrease the likelihood of major conflict.
------------------------------
End of AIList Digest
********************
∂16-Nov-83 2059 JF@SU-SCORE.ARPA schedule
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Nov 83 20:59:01 PST
Date: Wed 16 Nov 83 20:54:14-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: schedule
To: bats@SU-SCORE.ARPA
cc: bats-coordinators: ;
the schedule for monday's BATS meeting is:
10-11: Dan Greene, PARC
11-12: Gabriel Kuper, Stanford
12-1: Lunch
1-2: Nick Pippenger, IBM
2-3: Andrey Goldberg, UCB
3-3:30: Coffee Break
3:30-4:30: Allen Goldberg, UCSC
The location, once again, is CERAS, room 122 ("the lgi"). Your local
coordinators have campus maps and parking instructions. Contact me if
you have any questions or problems.
Hope to see you all monday,
joan
(jf@su-score)
-------
∂16-Nov-83 2106 DKANERVA@SRI-AI.ARPA Newsletter No. 9, November 17, 1983
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Nov 83 21:04:19 PST
Date: Wed 16 Nov 83 19:25:26-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Newsletter No. 9, November 17, 1983
To: csli-friends@SRI-AI.ARPA
CSLI Newsletter
November 17, 1983 * * * Number 9
CSLI ADVISORY PANEL VISIT STARTS TODAY
We want to welcome the members of the CSLI Advisory Panel and,
at the same time, encourage CSLI folks to make the best of this chance
to talk with Panel members about CSLI and about areas of common
interest. The extended tea on Thursday afternoon (Nov. 17) will
provide a good opportunity for this. On Friday afternoon (Nov. 18),
Panel members will meet with the various projects, the arrangements
for which are to be made Friday morning.
Present at Thursday's CSLI activities will be Panel members
George Miller, Nils Nilsson, and Bob Ritchie. Jerry Fodor and Barbara
Partee will be arriving Thursday evening. Rod Burstall has been
prevented by serious personal reasons from coming to CSLI this first
time, but he hopes to visit CSLI sometime soon.
* * * * * * *
CSLI STAFF
With the hiring of Frances Igoni as receptionist, the CSLI staff
in Ventura Hall is complete. In addition to her receptionist duties,
Frances will eventually be in charge of the equipment in Room 7, which
includes the Imagen and Diablo printers, the copy machine, and the
mailing machine. Frances's desk is in the lobby of Ventura Hall and
her phone number is 497-0628, the information number for CSLI.
The other staff members are Leslie Batema, secretary to Joan
Bresnan and Stanley Peters; Joyce Firstenberger, administrator for
CSLI and assistant director for administration for IMSSS; Dianne
Kanerva, editor; Sandy McConnel-Riggs, secretary to John Perry,
Barbara Grosz, and Brian Smith; Emma Pease, office assistant and
keeper of mailing lists; Bach-Hong Tran, administrative assistant and
staff coordinator; and Pat Wunderman, secretary to Jon Barwise and
Betsy Macken.
* * * * * * *
NEW CSLI-NEWSLETTER MAILBOX FOR CSLI NEWSLETTER ITEMS
To make it easier for people to submit items for the CSLI
newsletter, a new net-mail address CSLI-NEWSLETTER@SRI-AI has been
established. The newsletter material will be forwarded to Dianne
Kanerva or whoever is putting the newsletter together at that time.
As a further mailing convenience, from now on, all messages to
CSLI-REQUEST will also reach CSLI-REQUESTS, which is in the care of
Emma Pease.
* * * * * * *
CSLI SCHEDULE FOR THURSDAY, NOVEMBER 17, 1983
10:00 Research Seminar on Natural Language
Speaker: Stan Rosenschein (CSLI-SRI)
Title: "Issues in the Design of Artificial Agents
That Use Language"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Jerry Hobbs
Paper for discussion: "The Second Naive Physics Manifesto"
by Patrick J. Hayes.
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Mark Stickel (SRI)
Title: "A Nonclausal Connection-Graph
Resolution Theorem-Proving Program"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Charles Fillmore, Paul Kay, and
Mary Catherine O'Connor (UC Berkeley)
Title: "Idiomaticity and Regularity:
The Case of "Let Alone""
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. $0.75 all-day parking is available
in a lot just off Campus Drive, across from the construction site.
* * * * * * *
C1: SEMANTICS OF PROGRAMMING LANGUAGES GROUP
On Tuesday, November 15th, Carolyn Talcott from Stanford spoke to
the regular C1 group about a model of computation called RUM. Her
presentation continues next Tuesday (November 22nd, 09:30-11:30, at
Xerox PARC, room 1500), when she will show how RUM specifications
provide the basis for interpreters and compilers. The following week
(Nov. 29), Yannis Moschovakis (UCLA) will present a talk entitled
"Foundations of the Concept of Algorithm."
* * * * * * *
REMINDER: No CSLI activities next Thursday, November 24, because of
the Thanksgiving holiday.
* * * * * * *
TINLUNCH SCHEDULE
TINLunch will be held on each Thursday at Ventura Hall on the
Stanford University campus as a part of CSLI activities. Copies of
TINLunch papers will be at SRI in EJ251 and at Stanford University in
Ventura Hall.
November 17 Jerry Hobbs
November 24 THANKSGIVING
December 1 Paul Martin
December 8 John McCarthy
* * * * * * *
CSLI COLLOQUIUM SCHEDULE FOR DECEMBER
Thursdays, 4:15 p.m., Room G-19, Redwood Hall
Thursday, Dec. 1: "Selected Problems in Visible Language"
Charles Bigelow
Computer Science Department, Stanford University
Thursday, Dec. 8: "Deductive Program Synthesis Research"
Richard Waldinger
AI Center, SRI International
* * * * * * *
The Stanford Philosophy Department presents:
"ARISTOTLE: ESSENCE AND ACCIDENT"
Alan Code, U.C. Berkeley
Alan Code of the Berkeley Philosophy Department will be giving a
talk at 3:15 p.m., Friday, November 18, at Stanford in Room 92Q (the
Philosophy Department seminar room). The talk is entitled "Aristotle:
Essence and Accident" and will deal primarily with Aristotle's theory
of predication, his theory about the relationship between language and
the world. Anyone interested in the first stab at a realist semantics
might want to come. There will also be a reception for Code at 8:00
p.m. that evening at my house (106 Peter Coutts Circle), to which all
are welcome.
- John Etchemendy
* * * * * * *
KNOWLEDGE SEMINAR AT IBM, SAN JOSE
The Knowledge Seminar has been rescheduled to Friday, December 9,
10:00 a.m., so we can have the big auditorium at IBM. A public mailing
list has been established for the seminar. CSLI-FRIENDS@SRI-AI will
be removed from that list. If you want to be on the mailing list,
you should add yourself to it. To do that, send to MAILER@SU-HNV the
message "add knowledge." To remove yourself from the list, send to
MAILER@SU-HNV the message "delete knowledge."
- Moshe Vardi
* * * * * * *
WHY CONTEXT WON'T GO AWAY
On Tuesday, November 15, Jerry Hobbs from SRI International spoke
on "Context Dependence in Interpretation." It is a commonplace in AI
that the interpretation of utterances is thoroughly context-dependent.
A framework was presented for investigating the processes of discourse
interpretation that allows one to analyze the various influences of
context. In this framework, differences in context are reflected in
differences in the structure of the hearer's knowledge base, in what
knowledge he believes he shares with the speaker, and in his theory of
what is going on in the world. It was shown how each of these factors
can affect aspects of the interpretation of an utterance, for example,
how a definite reference is resolved.
NEXT MEETING: Tuesday, November 22, 3:15 p.m., Ventura Hall
"Indexicals: A Communication Model"
Julius Moravcsik
Philosophy Department
Stanford University
Abstract: Semantic interpretation is often a matter of pragmatic
context and will be illustrated in this talk by universal
propositions. Next, indexicals as references to objects versus stages
will be examined. Finally, indexicals and the understanding versus
expressive power dichotomy will be discussed.
* * * * * * *
TALKWARE SEMINAR - CS 377
No meeting November 23
Date: November 30
Speaker: Amy Lansky (Stanford/SRI)
Topic: GEM: A Methodology for Specifying Concurrent Systems
Time: 2:15 - 4
Place: 380Y (Math Corner)
* * * * * * *
LISP AS LANGUAGE COURSE: CHANGE OF PLANS
Some changes in plans:
1. I have (regretfully) rescheduled the "Lisp As Language" course
until spring quarter. This delay has been forced by uncertainties
about when the workstations will be delivered, coupled with a
realistic assessment of how much preparation will be needed
to develop the pedagogical environment on the workstation. I
apologize to anyone who was counting on its starting in January,
but we need to do it well, and I just don't think that can happen
before April.
2. We will, however, make some arrangement during winter quarter to
teach people to use Interlisp-D on the 1108 workstations as soon
as they arrive. That is, whereas the "Lisp As Language" course
will be fairly theoretical, we will also provide practical
instruction on how to write simple Interlisp programs on the 1108,
how to use the debugger, etc. This may be in the form of a
course, or small tutorial sessions, or some other arrangement.
If you would be interested in this second, "nuts and bolts"
approach to computation and to our LISP workstations, please send me
a note. There will clearly be many different levels of expectations,
from people who have never used LISP before, to people who are expert
LISP programmers but would like instruction in Interlisp-D and the
1108. We will do our best to accommodate these various needs, but
it is clear that the whole computational side of the CSLI community
will have to rally to this cause. Anyone with ideas about how we
should do this, or with suggestions as to who should teach, should
definitely get in touch.
Also, I will be organizing a small working group, to meet during
winter quarter, to help prepare the spring course. The idea will be
to work through Sussman's book, and other standard CS material, to
work out just how to present it all under a general linguistic
conception. We will develop exercises, spell out definitions of
various standard computer science notions, etc. If desired, I can
make this a small computer science graduate seminar, or else arrange
credit for any student who would like to participate.
I want NOT to assume any programming experience: This is
definitely meant to be a first course in computer science. It has
always been my intent to aim it at linguists, philosophers, and other
"students of language" who have not been exposed to computer science
before. The whole point is to make explicit the basic notions of
computer science, in a linguistically interesting way. So, please
feel welcome, even if you have never written a program in your life.
It won't happen until April, but it seemed important to make this
clear now.
- Brian Smith
* * * * * * *
MTC SEMINAR
Speaker: Lawrence Paulson, University of Cambridge
Title: "Verifying the Unification Algorithm in LCF"
Time: Wednesday, November 16, 12 noon
Place: Margaret Jacks Rm 352 (Stanford Computer Science Department)
Abstract:
Manna and Waldinger (1981) have outlined a substantial theory of
substitutions, establishing the Unification Algorithm. All their
proofs have been formalized in the interactive theorem-prover LCF,
using mainly structural induction and rewriting. The speaker will
present an overview of the problems and results of this project,
along with a detailed account of the LCF proof that substitution is
monotonic relative to the occurrence ordering.
Their theory is oriented towards Boyer and Moore's logic. LCF
accepted it with little change, though it proves theorems in Scott's
logic of continuous functions and fixed-points. Explicit reasoning
about totality was added everywhere (a nuisance), and the final
well-founded induction was reformulated as three nested structural
inductions. A simpler data structure for expressions was chosen, and
methods developed to express the abstract type for substitutions.
Widespread engineering improvements in the theorem-prover produced
the new Cambridge LCF as a descendant of Edinburgh LCF.
Some proofs require considerable user direction. A difficult
proof may result from a badly formulated theorem, the lack of suitable
lemmas, or weaknesses in LCF's automatic tools. The speaker will
discuss how to organize proofs.
Z. Manna and R. Waldinger. 1981.
Deductive Synthesis of the Unification Algorithm,
Science of Computer Programming 1, pages 5-48.
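For readers unfamiliar with the algorithm whose verification is described
above, here is a minimal textbook-style unification sketch; it illustrates
the algorithm only, and is neither the Manna-Waldinger formulation nor the
LCF development.

# Minimal textbook-style unification over terms represented as:
#   variables: strings starting with '?', e.g. '?x'
#   compound terms: tuples ('f', arg1, arg2, ...)
#   constants: other strings
# This is only an illustration of the algorithm being verified; it is not
# the representation or the proof development discussed in the talk.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings in the substitution."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst=None):
    """Return a most general unifier extending subst, or None if none exists."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

if __name__ == "__main__":
    print(unify(("f", "?x", "b"), ("f", "a", "?y")))   # {'?x': 'a', '?y': 'b'}
    print(unify("?x", ("f", "?x")))                    # None (occurs check)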
* * * * * * *
SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
On Wednesday, November 16, Yoram Moses of Stanford spoke on
"A Formal Treatment of Ignorance."
Coming Events: November 23, Craig Smorynski
November 30, J.E. Fenstad
Time: Wednesday, November 16, 4:15-5:30 PM
Place: Stanford Mathematics Dept. Faculty Lounge (383-N)
* * * * * * *
-------
∂16-Nov-83 2133 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 16 Nov 83 21:32:58 PST
Date: Thu 17 Nov 83 00:30:06-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: limitations of logic
To: Batali%MIT-OZ@MIT-MC.ARPA
cc: hewitt%MIT-OZ@MIT-MC.ARPA, phil-sci%MIT-OZ@MIT-MC.ARPA
In-Reply-To: Message from "John Batali <Batali at MIT-OZ>" of Wed 16 Nov 83 16:44:28-EST
From: John Batali <Batali at MIT-OZ>
Subject: limitations of logic
In-reply-to: The message of 16 Nov 83 13:28-EST from RICKL at MIT-OZ
I took Carl's point to be a refutation of the Tarskian notion of meaning
as having anything to do with "real" meaning. The argument goes:
(1) Any axiomitization of any non-trivial domain will be
formally inconsistent.......
(4) Therefore Tarskian semantics fails to capture our notion of
meaning.
Actually the point is far stronger than just the failure of Tarskian semantics.
As is well known, from an inconsistent axiom set all wffs are deducible as
theorems. Therefore, any axiomatization of any non-trivial domain implies
everything, and it follows trivially that the set of consequences of any
axiomatization of any non-trivial domain is *formally* *equal* *to* the set
of consequences of *any* other axiomatization of *any* other non-trivial
domain. Thus no axiomatization of any non-trivial domain has any
greater power (excepting length of deductions) than any other.
This is absurd; the absurdity follows from *logic* without appeal to
Tarskian semantics, and seems to argue against the use of formal
logical axiomatization as a *primary* basis for understanding
non-trivial domains (such as science).
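For concreteness, the "everything follows" step invoked here is the
classical ex falso quodlibet; a standard derivation, for an arbitrary
statement B, runs:

\begin{align*}
1.\;& A \wedge \neg A  && \text{(any inconsistency in the axiom set)}\\
2.\;& A                && \text{(from 1, $\wedge$-elimination)}\\
3.\;& A \vee B         && \text{(from 2, $\vee$-introduction, $B$ arbitrary)}\\
4.\;& \neg A           && \text{(from 1, $\wedge$-elimination)}\\
5.\;& B                && \text{(from 3 and 4, disjunctive syllogism)}
\end{align*}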
This seems to be a reasonable argument, standing or falling on the truth
of proposition 1. As a practical matter, it seems to be true....
I agree with you on both counts. (Does anyone on the net want to
argue that proposition 1 is not true for any non-trivial
non-mathematical domain, especially but not limited to science?)
-=*=- rick
-------
∂16-Nov-83 2147 GOLUB@SU-SCORE.ARPA Search for Chairman
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Nov 83 21:45:38 PST
Date: Wed 16 Nov 83 21:44:21-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Search for Chairman
To: faculty@SU-SCORE.ARPA
As I announced at the lunch on Tuesday, Don Knuth has agreed to chair
the search committee for a chairperson. GENE
-------
∂16-Nov-83 2224 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 16 Nov 83 22:24:46 PST
Date: Thu 17 Nov 83 01:22:17-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: limitations of logic
To: DAM%MIT-OZ@MIT-MC.ARPA
cc: phil-sci%MIT-OZ@MIT-MC.ARPA, PHW%MIT-OZ@MIT-MC.ARPA,
DUGHOF%MIT-OZ@MIT-MC.ARPA
In-Reply-To: Message from "DAM@MIT-OZ" of Wed 16 Nov 83 18:09:14-EST
Date: Wed, 16 Nov 1983 18:07 EST
From: DAM@MIT-OZ
Date: Wednesday, 16 November 1983, 13:58-EST
From: John Batali <Batali>
I took Carl's point to be....
(1) Any axiomitization of any non-trivial domain will be
formally inconsistent.
(2) Tarskian semantics assigns no meaning to inconsistent
logical theories......
I accept (1) but deny (2). It is individual statements, not
theories, which are given meaning by Tarskian semantics.
Unfortunately, as I pointed out in my reply to John earlier, from an
inconsistent axiom set *all* *possible* individual statements formally
follow as theorems.
However each belief or statement
when considered BY ITSELF is still given meaning under Tarskian model
theory.
If you take this tack, you must describe a procedure by which you can
decide whether to deduce A or ~A whenever you want to know whether A,
for your axiomatization will deduce both. All you are asserting here
is that, having deduced A or ~A (or more likely, both), you can tell
what that statement BY ITSELF means.
My TMS is based on Tarskian model theory and works just fine in
the presence of all sorts of contradictions.
Please correct me if I'm wrong (it was a while ago that I read your
TMS), but as I recall you did this by keeping track of the
dependencies of each deduction. I still don't see that this addresses
the problem of inconsistent deductions, other than being able to say
which axioms each was deduced from (which does help, agreed). Please
expand/correct/clarify how your TMS handles this?
-=*=- rick
-------
∂16-Nov-83 2256 @MIT-MC:KDF%MIT-OZ@MIT-MC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 16 Nov 83 22:55:15 PST
Date: Thu, 17 Nov 1983 01:51 EST
Message-ID: <KDF.11968259240.BABYL@MIT-OZ>
From: KDF%MIT-OZ@MIT-MC.ARPA
To: RICKL%MIT-OZ@MIT-MC.ARPA
Cc: DAM%MIT-OZ@MIT-MC.ARPA, DUGHOF%MIT-OZ@MIT-MC.ARPA,
phil-sci%MIT-OZ@MIT-MC.ARPA, PHW%MIT-OZ@MIT-MC.ARPA
Subject: Re: limitations of logic
In-reply-to: Msg of Thu 17 Nov 83 01:22:17-EST from RICKL@MIT-OZ
What RICKL (and to some degree, Carl) is ignoring in the argument
that "since everything follows from a contradiction, logic is useless"
are non-simplistic procedural renderings of logic. The image is that
an inference engine that hits a contradiction collapses in a quivering
heap (the Star Trek model) or continues merrily to deduce everything
until its plug is pulled.
In reality, contradictions are quite useful. Without the ability to
recognize contradictions, indirect proof is impossible. Dependency-
directed search explicitly relies on the ability to detect
contradictions. Furthermore, given that one is dealing with empirical
knowledge, analyzing anything requires making assumptions. These
assumptions can be wrong, and arriving at a contradiction lets you
find that out.
Another example: the closed world assumption. Carl has been giving it
bad press these days, but what he is really complaining about are
IMPLICIT closed-world assumptions made by AI hackers in constructing
theories and programs, NOT closed world assumptions that programs make
explicitly and are quite willing to retract when they discover they
are wrong. If you think you can get away without closed world
assumptions, recall the example of deciding to cross the street. You
don't know that a jet plane won't crash on you when you are in the
middle of the road, that the earth will not open and swallow you up,
and so forth, yet you decide it is safe to cross anyway. You may, when
falling into a manhole, decide to revise that theory. AI programs
have a hard time with that at present, and that's what I take Carl to
really be complaining about.
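A small sketch of an explicit, retractable closed-world assumption of the
kind described above; the street-crossing facts and the retraction policy
are invented for illustration.

# Sketch of an explicit closed-world assumption: facts not known to be true
# are assumed false, but each such assumption is recorded so it can be
# retracted when an observation contradicts it. The facts are made up.

class ClosedWorldKB:
    def __init__(self, known_facts):
        self.known = set(known_facts)
        self.cwa_assumptions = set()   # negative assumptions we are relying on

    def holds(self, fact):
        if fact in self.known:
            return True
        # Not derivable: assume false, but remember that we assumed it.
        self.cwa_assumptions.add(fact)
        return False

    def observe(self, fact):
        """A surprising observation retracts the corresponding CWA assumption."""
        if fact in self.cwa_assumptions:
            self.cwa_assumptions.discard(fact)
            print(f"retracting assumption: not({fact})")
        self.known.add(fact)

kb = ClosedWorldKB({"light-is-green"})
safe = kb.holds("light-is-green") and not kb.holds("open-manhole-ahead")
print("safe to cross:", safe)          # True, relying on an explicit CWA
kb.observe("open-manhole-ahead")       # falling in: the assumption is retracted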
∂17-Nov-83 0058 @MIT-MC:JMC@SU-AI limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 00:58:32 PST
Date: 16 Nov 83 2342 PST
From: John McCarthy <JMC@SU-AI>
Subject: limitations of logic
To: phil-sci%oz@MIT-MC
I will argue that lots of axiomatizations (note spelling) are consistent.
So far as I know, the statement that they are inconsistent is entirely
unsupported. I assert, however, that axiomatizations of common sense
domains will require non-monotonic reasoning to be strong enough, and
this may be confused with inconsistency by the naive. Domains of
scientific physics will not require non-monotonic reasoning, because
they aspire to a completeness not realizable with common sense domains.
Hewitt et al. probably have a potentially useful intuition, but unless they
make the effort to make it as precise as possible, this potential will
not be realized. Of course, I didn't hear Hewitt's lecture, but I did
read the "Scientific Community Metaphor" paper and didn't agree with
anything. Indeed I didn't find the paper coherent, but then I don't
think metaphors should be offered as arguments; at most they are hints.
My remark about non-monotonic reasoning being needed for formalizing
common sense is similar to DAM's remark about the need for making
closed world assumptions and taking them back. Circumscription generalizes
the usual ways of doing this. Incidentally, I now realize that I would
have found it more interesting to debate about the usefulness of logic
with Carl rather than with Roger Schank, who changed his mind about whether
he was willing to debate this subject. Perhaps at M.I.T. some time if
Carl is willing.
∂17-Nov-83 0908 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 09:07:56 PST
Date: Thu 17 Nov 83 11:50:05-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: limitations of logic
To: Hewitt%MIT-OZ@MIT-MC.ARPA
cc: phil-sci%MIT-OZ@MIT-MC.ARPA
In-Reply-To: Message from "Carl Hewitt <Hewitt at MIT-OZ>" of Wed 16 Nov 83 19:36:10-EST
Date: Wednesday, 16 November 1983, 19:35-EST
From: Carl Hewitt <Hewitt at MIT-OZ>
Date: Wed 16 Nov 83 13:28:32-EST
From: RICKL@MIT-OZ
Or would you substitute an actor-like model of
science (in which perhaps scientists are engaged in building and
refining actor-like models of their domain entities, and the entities'
behavior)?
Bill Kornfeld and I have published a paper on some preliminary work in this direction titled
"The Scientific Community Metaphor" in the IEEE Transactions on Systems, Man, and
Cybernetics for January, 1981.
I have already read this --- it is an attempt to identify some of the
sociology of science and show that similar mechanisms may have
applicability to parallel problem solving a.i. systems. It never
addresses the question of what scientists are doing if indeed they are not
constructing formal axiomatic systems --- i.e. what is (are) appropriate
representation(s) of scientific knowledge? The latter is the
direction I would hope this net discussion will take.
Since you were advocating the general superiority of actor
systems over formal axiomatizations, at least for much a.i. work, I
wondered if you also felt that they would be superior as a
representation of the process and product of science?
-=*=- rick
-------
∂17-Nov-83 0918 @MIT-MC:Hewitt%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 09:18:05 PST
Date: Thursday, 17 November 1983, 11:52-EST
From: Carl Hewitt <Hewitt%MIT-OZ@MIT-MC.ARPA>
Subject: limitations of logic
To: KDF%MIT-OZ@MIT-MC.ARPA
Cc: RICKL%MIT-OZ@MIT-MC.ARPA, DAM%MIT-OZ@MIT-MC.ARPA,
DUGHOF%MIT-OZ@MIT-MC.ARPA, phil-sci%MIT-OZ@MIT-MC.ARPA,
PHW%MIT-OZ@MIT-MC.ARPA, Hewitt%MIT-OZ@MIT-MC.ARPA
In-reply-to: The message of 17 Nov 83 01:51-EST from KDF at MIT-AI
Date: Thursday, 17 November 1983 01:51-EST
From: KDF at MIT-OZ
To: RICKL at MIT-OZ
cc: DAM at MIT-OZ, DUGHOF at MIT-OZ, phil-sci at MIT-OZ, PHW at MIT-OZ
Re: limitations of logic
What RICKL (and to some degree, Carl) is ignoring in the argument
that "since everything follows from a contradiction, logic is useless"
are non-simplistic procedural renderings of logic. THe image is that
an inference engine that hits a contradiction collapses in a quivering
heap (the Star Trek model) or continues merrily to deduce everything
until its plug is pulled.
Sounds like we may be in agreement here. In first order logic, ANYTHING and EVERYTHING
follows from a contradiction. So we need something beyond logic to deal with
empirical knowledge.
In reality, contradictions are quite useful. Without the ability to
recognize contradictions, indirect proof is impossible. Dependency-
directed search explicitly relies on the ability to detect
contradictions.
Good point. Logical systems enable us to trace contradictions which we find back to the
statements which are inconsistent.
Furthermore, given that one is dealing with empirical
knowledge, analyzing anything requires making assumptions. These
assumptions can be wrong, and arriving at a contradiction lets you
find that out.
Of course, when a contradiction is found in a branch of empirical knowledge and removed,
the resulting system will still be inconsistent. Sometimes scientists just decide to live
with some contradictions since it is not clear how to avoid throwing out the baby with
the bath water.
Another example: the closed world assumption.
What do you think the "closed world assumption" is?
Carl has been giving it bad press these days, but what he is really complaining about are
IMPLICIT closed-world assumptions made by AI hackers in constructing
theories and programs, NOT closed world assumptions that programs make
explicitly and are quite willing to retract when they discover they
are wrong.
Being able to cross the street safely is a noble goal. What does achieving this
goal have to do with "closed world assumption"?
If you think you can get away without closed world
assumptions, recall the example of deciding to cross the street. You
don't know that a jet plane won't crash on you when you are in the
middle of the road, that the earth will not open and swallow you up,
and so forth, yet you decide it is safe to cross anyway. You may, when
falling into a manhole, decide to revise that theory. AI programs
have a hard time with that at present, and that's what I take Carl to
really be complaining about.
∂17-Nov-83 0920 DKANERVA@SRI-AI.ARPA On-line copy of CSLI Newsletter
Received: from SRI-AI by SU-AI with TCP/SMTP; 17 Nov 83 09:19:45 PST
Date: Thu 17 Nov 83 09:18:05-PST
From: DKANERVA@SRI-AI.ARPA
Subject: On-line copy of CSLI Newsletter
To: csli-friends@SRI-AI.ARPA
In case your copy doesn't get through the mailer or you
want back copies of the CSLI Newsletter, you can find them
in the directory <CSLI> in the form <CSLI>NEWSLETTER.<date>.
For example, this week's newsletter is <CSLI>NEWSLETTER.11-17-83.
There has been some problem with garbled transmission.
Please let me know if you have had difficulty receiving good
copies of the newsletter.
-- Dianne Kanerva
-------
∂17-Nov-83 0934 @MIT-MC:DAM%MIT-OZ@MIT-MC TMSing
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 09:34:48 PST
Date: Thu, 17 Nov 1983 12:30 EST
Message-ID: <DAM.11968375594.BABYL@MIT-OZ>
From: DAM%MIT-OZ@MIT-MC.ARPA
To: phil-sci%MIT-OZ@MIT-MC.ARPA
Subject: TMSing
The basic reason my TMS has no trouble handling contradictions
is that it views inference as a tool in a larger system designed to
decide what premises to adopt. The problem of deciding what premises
should be adopted is fundamentally outside of logic. Thus there are
lots of computational aspects of my TMS which are extra-logical.
Logic is however a central tool of the system. Logic allows one to
determine the consequences of any particular set of premises.
More technically there are two reasons that the TMS has no
trouble with contradictions. First, it will in fact NOT deduce
everything in the presence of a contradiction (though it can be made
to deduce everything which follows from CONSISTENT premises). Second,
every non-tautological belief is treated as a premise which can be
retracted. This allows contradictions to be removed shortly after
they are identified (i.e. shortly after the system actually proves a
contradiction).
There are two reasons the system does not deduce everything
from a contradiction. First if two sets of axioms do not share any
symbols then they never interact. Thus a contradiction in one
component of a TMS system does not affect beliefs in another
component. Second, the system can never actually believe both A and
~A (a given node has only one truth slot which can be set to T or F
but not both). This fact prevents contradictions from generating
run-away inferences. Instead when a contradiction is identified the
inference process is (in some sense) locally arrested (the clause
directly responsible for the contradiction becomes ineffective).
These properties of my TMS are fortunate side effects of the basic
constraint propagation algorithm used. One would have to go out of
one's way to get this kind of constraint propagation algorithm to do
run-away inference in the face of contradictions.
The second reason contradictions are no problem is that they
are easily removed. The dependencies maintained by the TMS allow one
to identify the precise set of premises (beliefs) which were used in
deriving the contradiction. One can then add a clause which states
that these particular premises are mutually inconsistent. This new
clause adds to the deductive power of the TMS and thus a contradiction
can cause the system to "learn" new consequences of its premises. I
think of contradictions as positive things and try to get the system
to derive as many as it can. As Ken pointed out contradictions are
important for refutation reasoning (about closed world assumptions or
whatever).
David Mc
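A rough sketch, in present-day Python rather than the Lisp of the period, of the
two properties just described: one truth slot per node, contradictions traced to
the premises that support them, and the offending premise set recorded as a
"nogood" before one premise is retracted. This is purely illustrative -- it is
not DAM's TMS, and the names (TinyTMS, Node, assume, derive) are invented.

    class Node:
        def __init__(self, name):
            self.name = name
            self.belief = None      # True, False, or None (unknown) -- never both
            self.support = set()    # premises this belief ultimately rests on

    class TinyTMS:
        def __init__(self):
            self.nodes = {}
            self.nogoods = []       # premise sets discovered to be inconsistent

        def node(self, name):
            return self.nodes.setdefault(name, Node(name))

        def assume(self, name, value=True):
            n = self.node(name)
            n.belief, n.support = value, {name}

        def derive(self, name, value, froms):
            # Record a consequence of already-believed nodes.
            support = set().union(*(self.nodes[f].support for f in froms))
            n = self.node(name)
            if n.belief is None:
                n.belief, n.support = value, support
            elif n.belief != value:
                # Contradiction: nothing runs away; the premise set is
                # recorded as a nogood and one of its members is retracted.
                nogood = n.support | support
                self.nogoods.append(nogood)
                self.retract(next(iter(nogood)))

        def retract(self, premise):
            for n in self.nodes.values():
                if premise in n.support:
                    n.belief, n.support = None, set()

    # Example: tms = TinyTMS(); tms.assume("p"); tms.assume("q")
    # tms.derive("r", True,  froms=["p"])
    # tms.derive("r", False, froms=["q"])  # records nogood {"p","q"}, retracts one

A real TMS does much more -- well-founded support, a considered choice of which
premise to retract, reuse of recorded nogoods in later search -- but the sketch
shows why a detected contradiction need not lead to run-away inference.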
∂17-Nov-83 0948 @MIT-MC:Hewitt%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 09:47:29 PST
Date: Thursday, 17 November 1983, 12:12-EST
From: Carl Hewitt <Hewitt%MIT-OZ@MIT-MC.ARPA>
Subject: limitations of logic
To: John McCarthy <JMC at SU-AI.ARPA>
Cc: phil-sci%oz at λSCRC|DMλ, psz at MIT-ML, Hewitt%MIT-OZ@MIT-MC.ARPA
In-reply-to: The message of 17 Nov 83 02:42-EST from John McCarthy <JMC at SU-AI>
Date: Thursday, 17 November 1983 02:42-EST
From: John McCarthy <JMC at SU-AI>
To: phil-sci%oz at MIT-MC
Re: limitations of logic
JMC:
I will argue that lots of axiomatizations (note spelling) are consistent.
So far as I know, the statement that they are inconsistent is entirely
unsupported.

Hewitt:
I believe that the thesis of the inconsistency of axiomatizations of
expert knowledge in all established branches of science and engineering
is supported by the experience of people working in fields like medical
diagnosis. Perhaps we could get Peter Szolovits to report on his
experiences and those of his colleagues.

JMC:
I assert, however, that axiomatizations of common sense
domains will require non-monotonic reasoning to be strong enough, and
this may be confused with inconsistency by the naive. Domains of
scientific physics will not require non-monotonic reasoning, because
they aspire to a completeness not realizable with common sense domains.
Hewitt, et al., probably have a potentially useful intuition, but unless
they make the effort to make it as precise as possible, this potential
will not be realized.

Hewitt:
Point of information: How well does circumscription work for inconsistent
axiomatizations?

JMC:
Of course, I didn't hear Hewitt's lecture, but I did
read the "Scientific Community Metaphor" paper and didn't agree with
anything.

Hewitt:
What points in the paper did you find particularly disappointing besides
the use of metaphor?

JMC:
Indeed I didn't find the paper coherent, but then I don't
think metaphors should be offered as arguments; at most they are hints.

Hewitt:
I believe that analogies and metaphors are fundamental to reasoning and
argument. To me logical inference alone (without analogies and
metaphors) seems sterile and incomplete.

JMC:
My remark about non-monotonic reasoning being needed for formalizing
common sense is similar to DAM's remark about the need for making
closed world assumptions and taking them back. Circumscription
generalizes the usual ways of doing this. Incidentally, I now realize
that I would have found it more interesting to debate about the
usefulness of logic with Carl rather than with Roger Schank, who changed
his mind about whether he was willing to debate this subject. Perhaps
at M.I.T. some time if Carl is willing.

Hewitt:
Sure! It would be fun.
Cheers,
Carl
∂17-Nov-83 0959 @MIT-MC:DAM%MIT-OZ@MIT-MC The meaning of Theories
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 09:59:36 PST
Date: Thu, 17 Nov 1983 12:44 EST
Message-ID: <DAM.11968378084.BABYL@MIT-OZ>
From: DAM%MIT-OZ@MIT-MC.ARPA
To: RICKL%MIT-OZ@MIT-MC.ARPA
cc: phil-sci%MIT-OZ@MIT-MC.ARPA
Subject: The meaning of Theories
BATALI:
I took Carl's point to be....
(1) Any axiomitization of any non-trivial domain will be
formally inconsistent.
(2) Tarskian semantics assigns no meaning to inconsistent
logical theories......
DAM:
I accept (1) but deny (2). It is individual statements, not
theories, which are given meaning by Tarskian semantics.
RICKL:
If you take this tack, you must describe a procedure by which you can
decide whether to deduce A or ~A whenever you want to know whether A,
for your axiomatization will deduce both.
You seem to have missed my point. I am not talking about
the statements which are the potential consequences of the theory, but
rather the individual statements which make up the theory. Tarskian
semantics provides coherent meanings for individual parts of a theory
even when the theory taken as a whole is inconsistent.
David Mc
∂17-Nov-83 1011 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 10:11:33 PST
Date: Thu 17 Nov 83 12:53:43-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: limitations of logic
To: KDF%MIT-OZ@MIT-MC.ARPA
cc: DAM%MIT-OZ@MIT-MC.ARPA, DUGHOF%MIT-OZ@MIT-MC.ARPA,
phil-sci%MIT-OZ@MIT-MC.ARPA, PHW%MIT-OZ@MIT-MC.ARPA
In-Reply-To: Message from "KDF@MIT-OZ" of Thu 17 Nov 83 01:51:57-EST
Date: Thu, 17 Nov 1983 01:51 EST
From: KDF@MIT-OZ
Subject: Re: limitations of logic
What RICKL (and to some degree, Carl) is ignoring in the argument
that "since everything follows from a contradiction, logic is useless"
are non-simplistic procedural renderings of logic.
I don't think that "logic is useless" is quite the argument being
made, for it is obviously an extremely powerful tool. The question is
really whether it *alone* is an appropriate representation for scientific
(or other expert) knowledge. **Clearly** it is important, and any
account of scientific knowledge which failed to give logic a major role
would be incomplete. The question is, what is that role?
In reality, contradictions are quite useful.
Yes, of course. The problem is that *most* contradictions are *not*
useful, and with an inconsistent axiom set you get the useful ones and
the useless ones all mixed together. A procedural rendering of logic
which gave you *only* the useful ones would be a big win.
Without the ability to recognize contradictions, indirect proof is
impossible.
Without a consistent set of axioms, indirect proof is impossible.
I am sympathetic to the notion that knowledge is divisible into (a
hierarchy of) semi-autonomous domains with discoverable boundaries,
e.g. the way science is divided into branches and areas of
specialization. The *attempt* to axiomatize such a domain may be
informative by considering the way the attempt fails, which you allude
to w.r.t. dependency-directed search.
-=*=- rick
-------
∂17-Nov-83 1047 TAJNAI@SU-SCORE.ARPA IBM Wine and Cheese Party for Everyone
Received: from SU-SCORE by SU-AI with TCP/SMTP; 17 Nov 83 10:47:43 PST
Date: Thu 17 Nov 83 10:45:54-PST
From: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: IBM Wine and Cheese Party for Everyone
To: faculty@SU-SCORE.ARPA
IBM
Yorktown Heights and San Jose Research Labs
Cordially invite
the Faculty, Staff, and Students of CSD, CSL and CIS
to attend a wine and cheese party
Wednesday, November 30
4:30 to 6:30 p.m.
Tresidder Large Lounge
-------
∂17-Nov-83 1127 @MIT-MC:Tong.PA@PARC-MAXC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 11:27:06 PST
Date: 17 Nov 83 11:00 PST (Thursday)
From: Tong.PA@PARC-MAXC.ARPA
Subject: Re: limitations of logic
In-reply-to: JMC@SU-AI.ARPA's message of 16 Nov 83 23:42 PST
To: phil-sci%oz@MIT-MC.ARPA
JMC: I will argue that lots of axiomatizations ... are consistent. So far as I
know, the statement that they are inconsistent is entirely unsupported.
No question, lots of axiomatizations are consistent. But axiomatizations do not
appear out of thin air. A question that has not been raised yet in this discussion
is: How difficult is it for human beings (or machines, for that matter) to create a
consistent (as far as we can tell) axiomatization? The same sort of issues you
raised years ago in distinguishing epistemologically and heuristically adequate
representations apply here, too - if we were to represent the *process* of
creating an ultimately consistent (so far as we know) axiomatization, would the
intermediate axiomatizations be consistent? Have yours been? Doesn't every
codifier do a lot of erasing? A consistency-preserving representation of the
design process might be epistemologically adequate but would it be heuristically
adequate? Don't we often learn faster when we allow ourselves to make mistakes
than when we spend all our time and energy trying to avoid a fall? That has
been my experience in codifying expert knowledge for circuit design in the
Palladio system. I *could* rationally reconstruct a consistency-preserving process
that would have taken my knowledge base from scratch to where it stands now,
but that wasn't the way it happened. It has been my preferred style (and I'm sure
I'm not the only one!) to jot down/program ideas as they hit me; the
inconsistencies I subsequently discover tend to sharpen my vision of the total
picture.
The point, in short: an axiomatization, be it consistent or inconsistent, is a
designed artifact. Any pragmatic assessment of the product ought to account for
the process that evolved it; any representation of the process ought to account
for the intermediate designs.
Chris
∂17-Nov-83 1144 @MIT-MC:JERRYB%MIT-OZ@MIT-MC [KDF at MIT-AI: limitations of logic]
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 11:43:44 PST
Date: Thu, 17 Nov 1983 14:31 EST
Message-ID: <JERRYB.11968397526.BABYL@MIT-OZ>
From: JERRYB%MIT-OZ@MIT-MC.ARPA
To: KDF%MIT-OZ@MIT-MC.ARPA
cc: DAM%MIT-OZ@MIT-MC.ARPA, DUGHOF%MIT-OZ@MIT-MC.ARPA,
phil-sci%MIT-OZ@MIT-MC.ARPA, PHW%MIT-OZ@MIT-MC.ARPA
Subject: [KDF at MIT-AI: limitations of logic]
In-reply-to: Msg of 17 Nov 1983 11:52-EST from Carl Hewitt <Hewitt>
Date: Thursday, 17 November 1983 01:51-EST
From: KDF at MIT-OZ
Re: limitations of logic
In reality, contradictions are quite useful. Without the ability to
recognize contradictions, indirect proof is impossible. Dependency-
directed search explicitly relies on the ability to detect
contradictions. Furthermore, given that one is dealing with empirical
knowledge, analyzing anything requires making assumptions. These
assumptions can be wrong, and arriving at a contradiction lets you
find that out.
One of the problems with most present systems is what they do when
they discover a contradiction. In most cases they resort to an
extra-logical mechanism to resolve the contradiction (such as search
to find another assumption to replace the "wrong" assumption). This
makes it difficult to perform a reasoned analysis of the
contradiction, ie, after a contradiction "logic is useless" in
resolving the contradiction.
In reality, contradictions are quite useful.
For humans they are very useful, most present AI systems however don't
exploit them to their fullest because of the reason cited above.
The Viewpoint mechanism in Omega solves this problem by placing
theories in viewpoints and allowing one to have a logical theory in
viewpoint A about the structure of the (possibly contradictory)
logical theory in viewpoint B. Thus reasoned analysis of logical
contradictions can be performed.
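A very rough illustration of the viewpoint idea, in present-day Python; this is
not Omega, and the names (Viewpoint, assert_, is_contradictory) are invented.
The only point it shows is the bookkeeping: a statement *about* a contradictory
theory can live in a different, consistent collection of statements.

    class Viewpoint:
        def __init__(self, name):
            self.name = name
            self.sentences = set()

        def assert_(self, sentence):
            self.sentences.add(sentence)

        def is_contradictory(self):
            # crude syntactic test: some P and ("not", P) both asserted
            return any(("not", s) in self.sentences for s in self.sentences)

    b = Viewpoint("B")
    b.assert_("flies(george)")
    b.assert_(("not", "flies(george)"))      # B's theory is inconsistent

    a = Viewpoint("A")
    a.assert_(("contradictory", b.name))     # A consistently describes B

    assert b.is_contradictory() and not a.is_contradictory()

Omega's actual mechanism is of course far richer than this; the sketch only
separates the description of a theory from the theory itself.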
∂17-Nov-83 1421 JF@SU-SCORE.ARPA finding the room
Received: from SU-SCORE by SU-AI with TCP/SMTP; 17 Nov 83 14:21:28 PST
Date: Thu 17 Nov 83 14:16:01-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: finding the room
To: bats@SU-SCORE.ARPA
in order to find the LGI once you get to CERAS, walk in on the ground floor,
then go downstairs. walk past a group of LOTS computer terminals and the
room will be on your left. it is a large auditorium and says "large group
instruction" on it. there is no smoking, eating or drining allowed in the
auditorium. lunch will be delivered to the lobby outside; but i am hoping
that the weather will permit our taking it outside.
see you all monday,
joan
-------
I am puzzled why this message is sent to me. Also I'm puzzled by the
directions. If you come from this end of the campus, you're already
downstairs and LGI is immediately on the right.
∂17-Nov-83 1652 @MIT-MC:KDF%MIT-OZ@MIT-MC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 16:39:50 PST
Date: Thu, 17 Nov 1983 19:27 EST
Message-ID: <KDF.11968451505.BABYL@MIT-OZ>
From: KDF%MIT-OZ@MIT-MC.ARPA
To: RICKL%MIT-OZ@MIT-MC.ARPA
Cc: DAM%MIT-OZ@MIT-MC.ARPA, DUGHOF%MIT-OZ@MIT-MC.ARPA,
phil-sci%MIT-OZ@MIT-MC.ARPA, PHW%MIT-OZ@MIT-MC.ARPA
Subject: Re: limitations of logic
In-reply-to: Msg of Thu 17 Nov 83 12:53:43-EST from RICKL@MIT-OZ
I don't think that "logic is useless" is quite the argument being
made, for it is obviously an extremely powerful tool. The question is
really whether it *alone* is an appropriate representation for scientific
(or other expert) knowledge. **Clearly** it is important, and any
account of scientific knowledge which failed to give logic a major role
would be incomplete. The question is, what is that role?
You're hedging - your earlier claim was that it cannot be used. Saying
"**clearly**" doesn't help.
Yes, of course. The problem is that *most* contradictions are *not*
useful, and with an inconsistent axiom set you get the useful ones and
the useless ones all mixed together. A procedural rendering of logic
which gave you *only* the useful ones would be a big win.
What do you mean by the distinction between "useful" and "useless"
contradictions? Any contradiction is useful, since you can find out
which premises underlie it and thus find out that that particular
subset of your database is inconsistent.
Without a consistent set of axioms, indirect proof is impossible.
"Axioms" is too undifferentiated; let us speak of a domain theory and
the description of some particular situation encoded in that domain.
Suppose we reach a contradiction while reasoning about that situation.
If we cannot discharge it by throwing out particular facts about the
situation, then the particular pieces of the domain theory itself are
implicated as inconsistent. So you have just proved something that
you didn't expect to, namely that you don't understand the domain in
some way.
I am sympathetic to the notion that knowledge is divisible into (a
hierarchy of) semi-autonomous domains with discoverable boundaries,
e.g. the way science is divided into branches and areas of
specialization.
I don't see what that has to do with this discussion.
The *attempt* to axiomatize such a domain may be
informative by considering the way the attempt fails, which you allude
to w.r.t. dependency-directed search.
There is a level confusion here. Dependency-directed search is a
particular mechanism. It does not entail a committment to any
particular global organization of knowledge, or even a particular
representation language - only that there is some notion of
contradiction and some way to discover what premises led to it (i.e.,
some kind of logic).
∂17-Nov-83 1654 @MIT-MC:KDF%MIT-OZ@MIT-MC What to do until clarification comes
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 16:51:45 PST
Date: Thu, 17 Nov 1983 19:40 EST
Message-ID: <KDF.11968453833.BABYL@MIT-OZ>
From: KDF%MIT-OZ@MIT-MC.ARPA
To: JERRYB%MIT-OZ@MIT-MC.ARPA
Cc: DAM%MIT-OZ@MIT-MC.ARPA, DUGHOF%MIT-OZ@MIT-MC.ARPA,
phil-sci%MIT-OZ@MIT-MC.ARPA, PHW%MIT-OZ@MIT-MC.ARPA
Subject: What to do until clarification comes
In-reply-to: Msg of Thu 17 Nov 1983 14:31 EST from JERRYB@MIT-OZ
JERRYB is right in taking present AI systems to task for their
response to a contradiction. The worst mechanism is random
retraction; the default seems to be to ask the user. I use a stack of
"contradiction handlers", such that when a contradiction occurs each
handler is polled in turn to see if it wants to take care of it.
Since most of the control in my program is provided by Lisp code
(DAM's TMS is used to do the logic), the stack discipline reflects the
nesting of the various programs which are performing analyses. At the
bottom of the stack lies a handler which asks the user, but this gets
called only in the case of a bug. Not an elegant long-term solution,
but it does work.
I'm sure the viewpoint mechanism in Omega is sufficiently powerful
to allow the kind of meta-reasoning that you allude to, but has anyone
actually done it? If so, how different are the details from the FOL
approach?
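A minimal sketch of the handler-stack discipline, in present-day Python rather
than the Lisp described above; it is purely illustrative, and the names
(HandlerStack, ask_user, retract_assumed_defaults) are invented. Handlers are
pushed as analyses nest, each is polled in turn when a contradiction surfaces,
and an ask-the-user handler sits at the bottom as a last resort.

    class HandlerStack:
        def __init__(self):
            self._stack = []

        def push(self, handler):
            self._stack.append(handler)

        def pop(self):
            return self._stack.pop()

        def handle(self, premises):
            # Poll the most recently pushed handler first; a handler returns
            # the premise to retract, or None to decline.
            for handler in reversed(self._stack):
                choice = handler(premises)
                if choice is not None:
                    return choice
            raise RuntimeError("no handler accepted the contradiction")

    def ask_user(premises):
        print("Contradiction among premises:", sorted(premises))
        return input("premise to retract? ")

    def retract_assumed_defaults(premises):
        # Registered by the current analysis: give up one of its own
        # default assumptions before bothering anything else.
        defaults = {p for p in premises if p.startswith("default:")}
        return min(defaults) if defaults else None

    handlers = HandlerStack()
    handlers.push(ask_user)                   # bottom of the stack: last resort
    handlers.push(retract_assumed_defaults)   # pushed by the nested analysis

    # handlers.handle({"default:dry-road", "observed:wet-road"})
    # -> "default:dry-road"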
∂17-Nov-83 2112 @MIT-MC:HEWITT@MIT-XX I think the new mail system ate the first try
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 21:12:14 PST
Received: from MIT-XX by MIT-OZ via Chaosnet; 18 Nov 83 00:04-EST
Date: Fri, 18 Nov 1983 00:05 EST
Message-ID: <HEWITT.11968501993.BABYL@MIT-XX>
From: HEWITT@MIT-XX
To: phil-sci@MIT-OZ
Reply-to: Hewitt at MIT-XX
Subject: I think the new mail system ate the first try
CC: Hewitt@MIT-XX
Date: Thursday, 17 November 1983, 12:12-EST
From: Carl Hewitt <Hewitt%MIT-OZ at MIT-MC.ARPA>
To: John McCarthy <JMC at SU-AI.ARPA>
cc: phil-sci%oz at λSCRC|DMλ, psz at MIT-ML,
Hewitt%MIT-OZ at MIT-MC.ARPA
Re: limitations of logic
Date: Thursday, 17 November 1983 02:42-EST
From: John McCarthy <JMC at SU-AI>
To: phil-sci%oz at MIT-MC
Re: limitations of logic
I will argue that lots of axiomatizations (note spelling) are consistent.
So far as I know, the statement that they are inconsistent is entirely
unsupported.
I believe that the thesis of the inconsistency of axiomatizations of
expert knowledge in all established branches of science and engineering is
supported by the experience of people working in fields like medical
diagnosis. Perhaps we could get Peter Szolovits to report on his
experiences and those of his colleagues.
I assert, however, that axiomatizations of common sense
domains will require non-monotonic reasoning to be strong enough, and
this may be confused with inconsistency by the naive. Domains of
scientific physics will not require non-monotonic reasoning, because
they aspire to a completeness not realizable with common sense domains.
Hewitt, et al., probably have a potentially useful intuition, but unless they
make the effort to make it as precise as possible, this potential will
not be realized.
Point of information: How well does circumscription work for inconsistent
axiomatizations?
Of course, I didn't hear Hewitt's lecture, but I did
read the "Scientific Community Metaphor" paper and didn't agree with
anything.
What points in the paper did you find particularly disappointing besides the use
of metaphor?
Indeed I didn't find the paper coherent, but then I don't
think metaphors should be offered as arguments; at most they are hints.
I believe that analogies and metaphors are fundamental to reasoning and argument.
To me logical inference alone (without analogies and metaphors) seems sterile and
incomplete.
My remark about non-monotonic reasoning being needed for formalizing
common sense is similar to DAM's remark about the need for making
closed world assumptions and taking them back. Circumscription generalizes
the usual ways of doing this. Incidentally, I now realize that I would
have found it more interesting to debate about the usefulness of logic
with Carl rather than with Roger Schank, who changed his mind about whether
he was willing to debate this subject. Perhaps at M.I.T. some time if
Carl is willing.
Sure! It would be fun.
Cheers,
Carl
∂17-Nov-83 2135 @MIT-MC:HEWITT@MIT-XX The meaning of Theories
Received: from MIT-MC by SU-AI with TCP/SMTP; 17 Nov 83 21:35:14 PST
Received: from MIT-XX by MIT-OZ via Chaosnet; 18 Nov 83 00:30-EST
Date: Fri, 18 Nov 1983 00:27 EST
Message-ID: <HEWITT.11968505996.BABYL@MIT-XX>
From: HEWITT@MIT-XX
To: DAM@MIT-OZ
Cc: Hewitt@MIT-XX, phil-sci@MIT-OZ, RICKL@MIT-OZ
Reply-to: Hewitt at MIT-XX
Subject: The meaning of Theories
In-reply-to: Msg of 17 Nov 1983 12:44-EST from DAM at MIT-OZ
Date: Thursday, 17 November 1983 12:44-EST
From: DAM at MIT-OZ
To: RICKL at MIT-OZ
cc: phil-sci at MIT-OZ
Re: The meaning of Theories
BATALI:
I took Carl's point to be....
(1) Any axiomitization of any non-trivial domain will be
formally inconsistent.
(2) Tarskian semantics assigns no meaning to inconsistent
logical theories......
I am not talking about
the statements which are the potential consequences of the theory, but
rather the individual statements which make up the theory. Tarskian
semantics provides coherent meanings for individual parts of a theory
even when the theory taken as a whole is inconsistent.
I can't figure out what you have in mind. Could you give an example?
Thanks,
Carl
∂18-Nov-83 0927 PATASHNIK@SU-SCORE.ARPA student bureaucrat electronic address
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Nov 83 09:27:26 PST
Date: Fri 18 Nov 83 09:22:09-PST
From: Student Bureaucrats <PATASHNIK@SU-SCORE.ARPA>
Subject: student bureaucrat electronic address
To: students@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA, secretaries@SU-SCORE.ARPA,
research-associates@SU-SCORE.ARPA
cc: bureaucrat@SU-SCORE.ARPA
Reply-To: bureaucrat@score
The new bureaucrat electronic address is bureaucrat@score. Mail
sent to bureaucrat on sail or to bur or bureaucrat on diablo or
navajo will be forwarded there.
Oren and Yoni, bureaucrats
-------
∂18-Nov-83 0936 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Nov 83 09:36:38 PST
Date: Fri 18 Nov 83 12:28:46-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: limitations of logic
To: KDF%MIT-OZ@MIT-MC.ARPA
cc: DAM%MIT-OZ@MIT-MC.ARPA, DUGHOF%MIT-OZ@MIT-MC.ARPA,
phil-sci%MIT-OZ@MIT-MC.ARPA, PHW%MIT-OZ@MIT-MC.ARPA
In-Reply-To: Message from "KDF@MIT-OZ" of Thu 17 Nov 83 19:29:58-EST
Date: Thu, 17 Nov 1983 19:27 EST
From: KDF@MIT-OZ
Subject: Re: limitations of logic
KDF:
Without the ability to
recognize contradictions, indirect proof is impossible.
RICKL:
Without a consistent set of axioms, indirect proof is impossible.
KDF:
"Axioms" is too undifferentiated; let us speak of a domain theory....
So you have just proved something that
you didn't expect to, namely that you don't understand the domain in
some way.
While research is ongoing you already know that you don't understand
the domain in some way. This is why formal axiomatizations of a
domain of science *always* occur *after* the domain has been accepted
"scientific knowledge" for a long long time, and is essentially
dormant as a field for ongoing research. But the domain has already
been *primarily* understood for a long long time before this.
*Before* a formal axiomatization is done, the domain knowledge is
*already* being used, and *understood*, by working scientists.
Conclusion: formal axiomatization is not necessary to the effective
discovery, use, or understanding of scientific knowledge by scientists.
However: this should not be construed as saying logic is useless.
-=*=- rick
-------
∂18-Nov-83 1006 @MIT-MC:KDF%MIT-OZ@MIT-MC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Nov 83 10:05:59 PST
Date: Fri, 18 Nov 1983 13:02 EST
Message-ID: <KDF.11968643467.BABYL@MIT-OZ>
From: KDF%MIT-OZ@MIT-MC.ARPA
To: RICKL%MIT-OZ@MIT-MC.ARPA
Cc: DAM%MIT-OZ@MIT-MC.ARPA, DUGHOF%MIT-OZ@MIT-MC.ARPA,
phil-sci%MIT-OZ@MIT-MC.ARPA, PHW%MIT-OZ@MIT-MC.ARPA
Subject: Re: limitations of logic
In-reply-to: Msg of Fri 18 Nov 83 12:28:46-EST from RICKL@MIT-OZ
RICKL:
While research is ongoing you already know that you don't understand
the domain in some way. This is why formal axiomatizations of a
domain of science *always* occur *after* the domain has been accepted
"scientific knowledge" for a long long time, and is essentially
dormant as a field for ongoing research.
KDF:
Mathematics, at the very least in recent times, is of course an
exception. See Kline's book "Mathematics: The Loss of Certainty" for
some interesting arguments that mathematics is "just" another science.
And people don't necessarily need explicit formalization to be contradiction
driven - they just call them "controversy" instead.

RICKL:
But the domain has already
been *primarily* understood for a long long time before this.
*Before* a formal axiomatization is done, the domain knowledge is
*already* being used, and *understood*, by working scientists.

KDF:
You are confusing the explicit construction of axioms by scientists
with the characterization of their internal representations. Their
"primary" understanding (whatever you might mean by that) might be
well expressed in terms of formal logic despite any supposed lack of
skill at using it formally - a rock, after all, cannot solve
differential equations - it just behaves in a way that can be
described by them.
However the stuff inside your head is represented, if it does not
handle disjunction, conjunction, implication, and provide some notion
of consistency, then it will not be able to handle things that we know
we can represent and reason about (see Bob Moore's MS thesis, or his
more recent AAAI paper, for detailed examples and arguments). This
doesn't mean that the above criteria are sufficient, only necessary.
In short, Boole was probably right.

RICKL:
Conclusion: formal axiomatization is not necessary to the effective
discovery, use, or understanding of scientific knowledge by scientists.
-=*=- rick
-------
∂18-Nov-83 1025 BMACKEN@SRI-AI.ARPA Meetings with the Advisory Panel
Received: from SRI-AI by SU-AI with TCP/SMTP; 18 Nov 83 10:24:59 PST
Date: Fri 18 Nov 83 10:24:42-PST
From: BMACKEN@SRI-AI.ARPA
Subject: Meetings with the Advisory Panel
To: csli-folks@SRI-AI.ARPA
The Panel agreed that they would like to meet with the Area groups
this afternoon. All of you who are interested are welcome, and
feel free to attend more than one session. The times are:
Area B: 1:30-2
Area C: 2-2:30
Area D: 2:30-3
Area A: 3-3:30
The meetings will be in the Ventura Conference Room.
We hope all of you -- principals, associates, visitors, etc. -- will
attend at least one session.
Don't forget the wine and cheese reception to follow; some of our
experts have selected some nice wines for you to taste.
B.
-------
∂18-Nov-83 1025 @MIT-MC:DIETTERICH@SUMEX-AIM Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Nov 83 10:24:32 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 18 Nov 83 13:17-EST
Date: Fri 18 Nov 83 10:13:58-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: limitations of logic
To: RICKL%MIT-OZ@MIT-MC.ARPA
cc: DIETTERICH@SUMEX-AIM.ARPA, phil-sci%mit-oz@MIT-MC.ARPA
In-Reply-To: Message from "RICKL%MIT-OZ@MIT-MC.ARPA" of Fri 18 Nov 83 09:39:49-PST
RICKL:
While research is ongoing you already know that you don't understand
the domain in some way. This is why formal axiomatizations of a
domain of science *always* occur *after* the domain has been accepted
"scientific knowledge" for a long long time, and is essentially
dormant as a field for ongoing research. But the domain has already
been *primarily* understood for a long long time before this.
*Before* a formal axiomatization is done, the domain knowledge is
*already* being used, and *understood*, by working scientists.
There is no reason why partial knowledge cannot be axiomatized in
logic. In fact, this is one of the beauties of Tarskian model theory:
it shows that EVERY theory gives only incomplete knowledge. Logic
(algebraic) representations are usually superior to "analogical" or
direct representations in this regard. The major difficulty is that
"rational", but non-deductive, theory change is still not understood.
And changes that feel conceptually simple to us may be syntactically
complex in logic-style representations. We've known this for a long
time: We need representations in which simple ideas can be expressed
simply. Perhaps the right question to ask in this discussion is: Are
there things that can be expressed simply in ACTOR-like formalisms
that are awkward (not necessarily impossible) to express in logic?
The fact that explicit, articulated logical axiomatizations emerge
late in the research process does not necessarily indicate anything
about what may be going on inside the researcher's head. Several
alternative explanations are possible: (a) Such theories were always
there, but no one took the time (or had the introspective access to
them) to write them down, (b) Some other logical representation was
being employed (e.g., one that was more data-centered) and we are
witnessing the last in a series of reformulations of that
representation, (c) Scientists are sloppy: if they had taken the time
to axiomatize things sooner, they would have made progress faster,
etc. I don't entirely believe any of these explanations. My point is
that RICKL has not properly argued his point.
--Tom
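As a concrete gloss on the model-theoretic point, the following illustrative
Python fragment (not anything from the message above) shows a partial theory
giving only incomplete knowledge: the single axiom Bird(George) has several
models over even a one-element domain, and they disagree about Flies(George),
so the theory settles neither Flies(George) nor its negation.

    from itertools import product

    theory = [lambda bird, flies: bird]            # the one axiom: Bird(George)

    models = [(bird, flies)
              for bird, flies in product([False, True], repeat=2)
              if all(axiom(bird, flies) for axiom in theory)]

    print("models (Bird, Flies):", models)         # [(True, False), (True, True)]
    print("Flies(George) undetermined:",
          len({flies for _, flies in models}) > 1)  # True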
-------
∂18-Nov-83 1033 JF@SU-SCORE.ARPA finding the room for BATS
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 Nov 83 10:33:25 PST
Date: Fri 18 Nov 83 10:32:35-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: finding the room for BATS
To: aflb.su@SU-SCORE.ARPA
as jmc was kind enough to point out to me, the directions I gave to the
CERAS large group instruction room in which bats will be held next week
are more useful for non-stanford people who will be coming in from the
parking lot between Bowdoin and Cowell. for those of you coming in from
jacks, just get to the lower level of CERAS--the lgi is on the other side
of the building from the LOTS-B terminal room. it is easy to find.
see you monday.
joan
-------
∂18-Nov-83 1056 @MIT-MC:DAM%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Nov 83 10:55:56 PST
Date: Fri, 18 Nov 1983 13:37 EST
Message-ID: <DAM.11968649814.BABYL@MIT-OZ>
From: DAM%MIT-OZ@MIT-MC.ARPA
To: phil-sci%MIT-OZ@MIT-MC.ARPA
Subject: limitations of logic
Date: Friday, 18 November 1983 12:28-EST
From: RICKL
While research is ongoing you already know that you don't understand
the domain in some way. This is why formal axiomatizations of a
domain of science *always* occur *after* the domain has been accepted
"scientific knowledge" for a long long time before this.
Conclusion: formal axiomatization is not necessary to the effective
discovery, use, or understanding of scientific knowledge by
scientists.
As KDF points out your conclusion depends on the assumption
that people ONLY use logic when they actually write P's and Q's. This
assumption is COMPLETELY unwarrented. It is possible to make a
distinction between precision and formalism. Scientists (and
mathematicicians) rarely use first order logic explicitly. However
their thinking is usually PRECISE. Are you claiming that physicists
don't use mathematics until they completely understand a domain? It
seems likely to me that all precise mathematical thinking involves
some form of INTERNAL representation which would be called a formal
logic (though perhaps not first order predicate calculus). Do you
feel that precise thinking is only used "a long long time" after a
theory has become "accepted scientific knowledge"?
David Mc
∂18-Nov-83 1110 @MIT-MC:DAM%MIT-OZ@MIT-MC The meaning of Theories
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Nov 83 11:09:47 PST
Date: Fri, 18 Nov 1983 13:45 EST
Message-ID: <DAM.11968651338.BABYL@MIT-OZ>
From: DAM%MIT-OZ@MIT-MC.ARPA
To: phil-sci%MIT-OZ@MIT-MC.ARPA
Subject: The meaning of Theories
DAM:
Tarskian semantics provides coherent meanings for individual parts of
a theory even when the theory taken as whole is inconsistent.
Hewitt:
I can't figure out what you have in mind. Could you give an example?
EXAMPLE: The theory T is:
"All birds fly"
"George is an Ostrich"
"All Ostriches are birds"
"No Ostriches fly"
Assume each statement has been appropriately translated into
FOPC. The theory is inconsistent and has no models. However each
statement when considered by itself is given a coherent meaning by
Tarskian semantics.
David Mc
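The example can also be checked mechanically. The following illustrative Python
fragment (not part of DAM's message) enumerates interpretations of Bird,
Ostrich, and Flies over the one-element domain {George}: each sentence has a
model by itself, while the four together have none -- and the chain Ostrich ->
Bird -> Flies against Ostrich -> not-Flies rules out models over larger domains
as well.

    from itertools import product

    sentences = {
        "All birds fly":
            lambda bird, ostrich, flies: (not bird) or flies,
        "George is an Ostrich":
            lambda bird, ostrich, flies: ostrich,
        "All Ostriches are birds":
            lambda bird, ostrich, flies: (not ostrich) or bird,
        "No Ostriches fly":
            lambda bird, ostrich, flies: (not ostrich) or (not flies),
    }

    interpretations = list(product([False, True], repeat=3))  # (bird, ostrich, flies)

    for name, s in sentences.items():
        assert any(s(*i) for i in interpretations)
        print(name, "-- satisfiable by itself")

    assert not any(all(s(*i) for s in sentences.values()) for i in interpretations)
    print("...but the theory as a whole has no model")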
∂18-Nov-83 1139 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Craig Smorynski, San Jose State U.
TITLE: Self-Reference and Bi-Modal Logic
TIME: Wednesday, November 23, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
Abstract :
Some results from the modal and bi-modal analysis of self-reference
in arithmetic are discussed. This includes work of Solovay,
Carlson, and the speaker.
Coming Events:
November 30, J.E. Fenstad
∂18-Nov-83 1147 @MIT-MC:Agha%MIT-OZ@MIT-MC First-Order logic and Human Knowledge
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Nov 83 11:44:24 PST
Received: from MIT-APIARY-5 by MIT-OZ via Chaosnet; 18 Nov 83 14:30-EST
Date: Friday, 18 November 1983, 14:31-EST
From: Agha%MIT-OZ@MIT-MC.ARPA
Subject: First-Order logic and Human Knowledge
To: phil-sci%MIT-OZ@MIT-MC.ARPA
An interesting discussion has ensued from Carl's provocative talk last
week. The emphasis placed by some participants seems rather misplaced. The
question is not whether theories in the abstract can be consistent or
otherwise, for obviously some theories in some domains (such as first-order
logic) may be consistent. The question is whether theories about the real world
are expected to be inherently inconsistent. The conjecture is that if the
domain of a theory in the real world is sufficiently complex to be of
interest to Artificial Intelligence, then any axiomatization of the theory
will be inconsistent.
One line of argument offered in support of this conjecture is empirical.
Another argument is epistemological. I hold that the nature of knowledge
about the real world is such that propositions that may be used as axioms
are almost invariably overgeneralizations and therefore tend to be false
in the meta-theoretic sense. To use Carl's example, let us suppose the
following are known "facts" about the world :
1. A person is sleepy if s(he) has been awake 24 hours.
2. A person is not sleepy if s(he) has an exam the following morning.
Now if a system uses these two facts as axioms and encounters a query about
John who has been awake 24 hours and has an exam the following morning it
will have an inconsistency if it acknowledges the given fact about John.
The question arises on how to deal with this inconsistency. One approach
would be to modify the axioms so that the premise of the first excludes the
premise of the second and vice-versa. This approach is highly impractical
because one would in principle have to perpetually modify all axioms to
exclude each other's premises. So even if logic helps in pinpointing which
axioms contradict each other, it is not clear what to do about it. Certainly,
one does not wish everything to be deducible in a system simply because
two propositions contradict each other. If 1 and 2 above contradict 3, that's
no reason for the system to be able to conclude apples are purple. It may
even be advisable not to modify 1 and 2!
What is needed, then, is a description system which deals with the nature of
facts in the real world, and not (first-order) logic alone. I do not see how
multi-valued logic addresses this issue either.
Comments welcome!
Gul Agha.
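To make the difficulty concrete, here is an illustrative forward-chaining
fragment in present-day Python (not anything posted above; the predicate names
are invented). Reading 1 and 2 as ordinary implications and adding the facts
about John immediately yields both sleepy(John) and its negation; classical
logic by itself offers no guidance about which premise to weaken.

    # Naive forward chaining over ground facts, with rules 1 and 2 read as
    # ordinary material implications.
    facts = {"awake24(John)", "exam-tomorrow(John)"}

    rules = [
        ({"awake24(John)"},       "sleepy(John)"),          # axiom 1
        ({"exam-tomorrow(John)"}, ("not", "sleepy(John)")),  # axiom 2
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    contradictions = {p for p in facts if ("not", p) in facts}
    print("contradictory atoms:", contradictions)   # -> {'sleepy(John)'}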
∂18-Nov-83 1209 @SRI-AI.ARPA:CLT@SU-AI SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
Received: from SRI-AI by SU-AI with TCP/SMTP; 18 Nov 83 12:09:32 PST
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Fri 18 Nov 83 11:44:38-PST
Date: 18 Nov 83 1139 PST
From: Carolyn Talcott <CLT@SU-AI>
Subject: SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Craig Smorynski, San Jose State U.
TITLE: Self-Reference and Bi-Modal Logic
TIME: Wednesday, November 23, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
Abstract :
Some results from the modal and bi-modal analysis of self-reference
in arithmetic are discussed. This includes work of Solovay,
Carlson, and the speaker.
Coming Events:
November 30, J.E. Fenstad
∂19-Nov-83 0106 NET-ORIGIN@MIT-MC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 19 Nov 83 01:06:09 PST
Received: from MIT-MOON by MIT-OZ via Chaosnet; 19 Nov 83 04:02-EST
Date: Saturday, 19 November 1983, 04:06-EST
From: jcma@λSCRC|DMλ
Subject: Re: limitations of logic
To: RICKL%MIT-OZ@MIT-MC.ARPA
Cc: phil-sci%MIT-OZ@MIT-MC.ARPA
In-reply-to: The message of 18 Nov 83 12:01-EST from RICKL at MIT-AI
From: RICKL@MIT-OZ
Subject: Re: limitations of logic
I am arguing that logic is important, but that formal logical axiomatization
is not the *primary* basis for our understanding of science.
I agree with you on this limitation because I don't see an asymptotically
complete representation of scientific method contained in logic. That is, if
logic came with an adequate representation and heuristics for performing
scientific research, it alone would be sufficient to perform scientific
inquiry (assuming it had the correct model). But since it doesn't, logic
remains one of various powerful tools for conducting scientific inquiry. Of
course, if it did, the AI problem would be solved and philosophy of science
would be synonymous with logic [something I think T.S. Kuhn might argue with].
∂19-Nov-83 1533 ARK@SU-SCORE.ARPA reminder
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Nov 83 15:33:33 PST
Date: Sat 19 Nov 83 15:31:11-PST
From: Arthur Keller <ARK@SU-SCORE.ARPA>
Subject: reminder
To: bats@SU-SCORE.ARPA
one last reminder about monday's meeting.
place:
-------
∂19-Nov-83 1537 ARK@SU-SCORE.ARPA reminder
Received: from SU-SCORE by SU-AI with TCP/SMTP; 19 Nov 83 15:37:28 PST
Date: Sat 19 Nov 83 15:35:17-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: reminder
Sender: ARK@SU-SCORE.ARPA
To: bats@SU-SCORE.ARPA
Reply-To: JF@SCORE
sorry about that.
as i was saying, one last reminder about monday's meeting.
place: CERAS LGI, Stanford
time: 10-5
schedule:
10: Dan Greene
11: Gabi Kuper
12: Lunch
1: Nick Pippinger
2: Andrey Goldberg
3: Coffee Break
3:30: Allen Goldberg
See you all there.
joan
-------
∂19-Nov-83 2258 @MIT-MC:Laws@SRI-AI Overlap with AIList
Received: from MIT-MC by SU-AI with TCP/SMTP; 19 Nov 83 22:58:00 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 20 Nov 83 01:54-EST
Date: Sat 19 Nov 83 22:52:43-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Overlap with AIList
To: Phil-Sci%MIT-OZ@MIT-MC.ARPA
Greetings. I believe that most of you are familiar with AIList,
which I moderate. The recent Phil-Sci discussion has touched on
several AI topics other than logic and the foundations of science.
I would like to alert my readers to the discussion, and am unsure
of the best way to do so.
I am hesitant to simply announce that Phil-Sci is active again.
My message would reach AIList readers long after the Phil-Sci
discussion triggering it, so that interested parties would have
to access the Phil-Sci archives; I doubt that anyone would do so.
Further, any who now join Phil-Sci might well miss the "interesting"
AI-related discussion, and would have to scan weeks or months of
unrelated discussions before giving up and dropping out. (No offense
meant. I am specifically disregarding those who are interested in
foundations of logic and the philosophy of science, and who should
already be Phil-Sci members. For their benefit I have mentioned
Phil-Sci in early AIList issues.)
On the other hand, I don't want the entire Phil-Sci discussion
cc'd to AIList. While our lists have much in common, I don't
mean for AIList to usurp every net discussion of logic, mathematics,
computer science, art, and other subjects pertinent to AI.
What I would like to do is to select a few messages that seem
particularly pertinent to AIList, reprint them as a special issue,
and suggest that anyone wanting to follow the discussion further
should join Phil-Sci. I have so far selected 11 messages by 8
authors from the recent flurry on knowledge representation and
inconsistency. While such a culling necessarily omits some of the
context of the original discussion, I believe the sampling would
be fair to the authors.
Are there objections to my reprinting material in this manner?
I would prefer to obtain approval en masse since my experience
indicates that dickering with authors about the contents of specific
messages takes excessive time and effort. AIList, like Phil-Sci,
is a trace of ongoing discussions rather than a repository for
edited presentations. AIList welcomes position statements or
careful summaries, but the real-time stream is also important.
-- Ken Laws
-------
∂20-Nov-83 1008 PETERS@SRI-AI.ARPA Building Planning
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Nov 83 10:08:14 PST
Date: Sun 20 Nov 83 09:59:07-PST
From: Stanley Peters <PETERS@SRI-AI.ARPA>
Subject: Building Planning
To: csli-folks@SRI-AI.ARPA
Charles Smith has indicated to the University that with high
probability the System Development Foundation will provide funds
for a "simple" 2 million dollar frame structure adjacent to
Ventura and Casita or a larger, better looking structure that,
together with remodeling or replacement of Casita and remodeling
of Ventura, would give a unified appearance to the site. (The
latter might cost 4 to 5 million dollars.)
We are now in the process of selecting an architect who will
provide schematic drawings and estimates for both types of
structures.
At this point, the University wants input from us about the type
of environment we wish the building to provide. How should the
building feel to people working there -- comfortable? friendly?
businesslike? What kind of work spaces are needed -- enclosed
offices? open spaces? common areas? Will the building be used
24 hours per day? What sort of security do we need? What are
the trade-offs between parking space and a larger building?
This is the time for all of you to send Betsy Macken
(BMACKEN@SRI-AI) your thoughts about these and similar issues.
Many of you have ideas about the pros and cons of working
environments you have experienced as well as definite ideas about
what you want for CSLI. Please send her your ideas as soon as
possible so she can incorporate them into the architect selection
process.
-------
∂20-Nov-83 1722 LAWS@SRI-AI.ARPA AIList Digest V1 #100
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Nov 83 17:21:47 PST
Date: Sunday, November 20, 1983 2:53PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #100
To: AIList@SRI-AI
AIList Digest Sunday, 20 Nov 1983 Volume 1 : Issue 100
Today's Topics:
Intelligence - Definition & Msc.,
Looping Problem - The Zahir,
Scientific Method - Psychology
----------------------------------------------------------------------
Date: Wed, 16 Nov 1983 10:48:34 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications Mgr.)
Subject: Intelligence and Categorization
I think Tom Portegys' comment in 1:98 is very true. Knowing whether or
not a thing is intelligent, has a soul, etc., is quite helpful in letting
us categorize it. And, without that categorization, we're unable to know
how to understand it. Two minor asides that might be relevant in this
regard:
1) There's a school of thought in the fields of linguistics, folklore, and
anthropology, which is based on the notion (admittedly arguable)
that the only way to truly understand a culture is to first record and
understand its native categories, as these structure both its language and its
thought, at many levels. (This ties in to the Sapir-Whorf hypothesis that
language structures culture, not the reverse...) From what I've read in this
area, there is definite validity in this approach. So, if it's reasonable to
try and understand a culture in terms of its categories (which may or may not
be translatable into our own culture's categories, of course), then it's
equally reasonable for us to need to categorize new things so that we can
understand them within our existing framework.
2) Back in medieval times, there was a concept known as the "Great
Chain of Being", which essentially stated that everything had its place in
the scheme of things; at the bottom of the chain were inanimate things, at the
top was God, and the various flora and fauna were in-between. This set of
categories structured a lot of medieval thinking, and had major influences on
Western thought in general, including thought about the nature of intelligence.
Though the viewpoint implicit in this theory isn't widely held any more, it's
still around in other, more modern, theories, but at a "subconscious" level.
As a result, the notion of 'machine intelligence' can be a troubling one,
because it implies that the inanimate is being relocated in the chain to a
position nearly equal to that of man.
I'm ranging a bit far afield here, but this ought to provoke some discussion...
Dave Axler
------------------------------
Date: 15 Nov 83 15:11:32-PST (Tue)
From: pur-ee!CS-Mordred!Pucc-H.Pucc-I.Pucc-K.ags @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: pucc-k.115
Faster = More Intelligent. Now there's an interesting premise...
According to relativity theory, clocks (and bodily processes, and everything
else) run faster at the top of a mountain or on a plane than they do at sea
level. This has been experimentally confirmed.
Thus it seems that one can become more intelligent merely by climbing a
mountain. Of course the effect is temporary...
Maybe this is why we always see cartoons about people climbing mountains to
inquire about "the meaning of life" (?)
Dave Seaman
..!pur-ee!pucc-k!ags
------------------------------
Date: 17 Nov 83 16:38 EST
From: Jim Lynch <jimlynch@nswc-wo>
Subject: Continuing Debate (discussion) on intelligence.
I have enjoyed the continuing discussion concerning the definition of
intelligence and would only add a few thoughts.
1. I tend to agree with Minsky that intelligence is a social concept,
but I believe that it is probably even more of an emotional one. Intelligence
seems to fall in the same category with notions such as beauty, goodness,
pleasant, etc. These concepts are personal, intensely so, and difficult to
describe, especially in any sort of quantitative terms.
2. A good part of the difficulty with defining Artificial Intelligence is
due, no doubt, to a lack of a good definition for intelligence. We probably
cannot define AI until the psychologists define "I".
3. Continuing with 2, the definition probably should not worry us too much.
After all, do psychologists worry about "Natural Computation"? Let us let the
psychologists worry about what intelligence is, let us worry about how to make
it artificial!! (As has been pointed out many times, this is certainly an
iterative process and we can surely learn much from each other!).
4. The notion of intelligence seems to be a continuum; it is doubtful
that we can define a crisp and fine line dividing the intelligent from the
non-intelligent. The current debate has provided enough examples to make
this clear. Our job, therefore, is not to make computers intelligent, but
to make them more intelligent.
Thanks for the opportunity to comment,
Jim Lynch, Dahlgren, Virginia
------------------------------
Date: Thu 17 Nov 83 16:07:41-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Intelligence
I had some difficulty refuting a friend's argument that intelligence
is "problem solving ability", and that deciding what problems to solve
is just one facet or level of intelligence. I realize that this is
a vague definition, but does anyone have a refutation?
I think we can take for granted that summing the same numbers over and
over is not more intelligent than summing them once. Discovering a
new method of summing them (e.g., finding a pattern and a formula for
taking advantage of it) is intelligent, however. To some extent,
then, the novelty of the problem and the methods used in its solution
must be taken into account.
Suppose that we define intelligence in terms of the problem-solving
techniques available in an entity's repertoire. A machine's intelligence
could be described much as a pocket calculator's capabilities are:
this one has modus ponens, that one can manipulate limits of series.
The partial ordering of such capabilities must necessarily be goal-
dependent and so should be left to the purchaser.
I agree with the AIList reader who defined an intelligent entity as
one that builds and refines knowledge structures representing its world.
Ability to manipulate and interconvert particular knowledge structures
fits well into the capability rating system above. Learning, or ability
to remember new techniques so that they need not be rederived, is
downplayed in this view of intelligence, although I am sure that it is
more than just an efficiency hack. Problem solving speed seems to be
orthogonal to the capability dimension, as does motivation to solve
problems.
-- Ken Laws
------------------------------
Date: 16 Nov 83 4:21:55-PST (Wed)
From: harpo!seismo!philabs!linus!utzoo!utcsstat!laura @ Ucb-Vax
Subject: KILLING THINGS
Article-I.D.: utcsstat.1439
I think that one has to make a distinction between dolphins killing fish
to eat, and hypothetical turtles killing rabbits, not to eat, but because
they compete for the same land resources. To my mind they are different
sorts of killings (though from the point of view of the hapless rabbit
or fish they may be the same). Dolphins kill sharks that attack the school,
though -- I do not think that this 'self-defense' killing is the same as
the planned extermination of another species.
if you believe that planned extermination is the definition of intelligence
then I'll bet you are worried about SETI. On the other hand, I suppose you
must not believe that pacifist vegetarian monks qualify as intelligent.
Or is intelligence something possessed by a species rather than an individual?
Or perhaps you see that eating plants is indeed killing them. Now we
have defined all animals, and plants like the venus fly-trap, as intelligent
while most plants are not. All the protists that I can think of right now
would also be intelligent, though a euglena would be an interesting case.
I think that "killing things" is either too general or too specific
(depending on your definition of killing and which things you admit
to your list of "things") to be a useful guide for intelligence.
What about having fun? Perhaps the ability to laugh is the dividing point
between man (as a higher intelligence) and animals, who seem to have
some appreciation for pleasure (if not fun) as distinct from plants and
protists whose joy I have never seen measured. Dolphins seem to have
a sense of fun as well, which is (to my mind) a very good thing.
What this bodes for Mr. Spock, though, is not nice. And despite
megabytes of net.jokes, this 11/70 isn't chuckling. :-)
Laura Creighton
utzoo!utcsstat!laura
------------------------------
Date: Sun 20 Nov 83 02:24:00-CST
From: Aaron Temin <CS.Temin@UTEXAS-20.ARPA>
Subject: Re: Artificial Humanity
I found these errors really interesting.
I would think a better rule for Eurisko to have used in the bounds
checking case would be to keep the bounds-checking code, but use it less
frequently, only when it was about to announce something as interesting,
for instance. Then it may have caught the flip-flop error itself, while
still gaining speed other times.
The "credit assignment bug" makes me think Eurisko is emulating some
professors I have heard of....
The person bug doesn't even have to be a bug. The rule assumes that if a
person is around, then he or she will answer a question typed to a
console, perhaps? Rather it should state that if a person is around,
Eurisko should ask THAT person the question. Thus if Eurisko is a
person, it should have asked itself (not real useful, maybe, but less of
a bug, I think).
While computer enthusiasts like to speak of all programs in
anthropomorphic terms, Eurisko seems like one that might really deserve
that. Anyone know of any others?
-aaron
------------------------------
Date: 13 Nov 83 10:58:40-PST (Sun)
From: ihnp4!houxm!hogpc!houti!ariel!vax135!cornell!uw-beaver!tektronix
!ucbcad!notes @ Ucb-Vax
Subject: Re: the halting problem in history - (nf)
Article-I.D.: ucbcad.775
Halting problem, lethal infinite loops in consciousness, and the Zahir:
Borges' "Zahir" story was interesting, but the above comment shows just
how successful Borges is in his stylistic approach: by overwhelming the
reader with historical references, he lends legitimacy to an idea that
might only be his own. Try tracking down some of his references some-
time--it's not easy! Many of them are simply made up.
Michael Turner (ucbvax!ucbesvax.turner)
------------------------------
Date: 17 Nov 83 13:50:54-PST (Thu)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: I recall Rational Psychology
Article-I.D.: ncsu.2407
First, let's not revive the Rational Psychology debate. It died of natural
causes, and we should not disturb its immortal soul. However, F Montalvo
has said something very unpleasant about me, and I'm not quite mature
enough to ignore it.
I was not making an idle attack, nor do I do so with superficial knowledge.
Further, I have made quite similar statements in the presence of the
enemy -- card carrying psychologists. Those psychologists whose egos are
secure often agree with the assessment. Proper scientific method is very
hard to apply in the face of stunning lack of understanding or hard,
testable theories. Most proper experiments are morally unacceptable in
the psychological arena. As it is, there are so many controls not done,
so many sources of artifact, so much use of statistics to try to ferret
out hoped-for correlations, so much unavoidable anthropomorphism. As with
scholars such as H. Dumpty, you can define "science" to mean what you like,
but I think most psychological work fails the test.
One more thing, It's pretty immature to assume that someone who disagrees
with you has only superficial knowledge of the subject. (See, I told you
I was not very mature ....)
----GaryFostel----
------------------------------
End of AIList Digest
********************
∂20-Nov-83 2100 LAWS@SRI-AI.ARPA AIList Digest V1 #101
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Nov 83 20:59:28 PST
Date: Sunday, November 20, 1983 3:15PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #101
To: AIList@SRI-AI
AIList Digest Monday, 21 Nov 1983 Volume 1 : Issue 101
Today's Topics:
Pattern Recognition - Forced Matching,
Workstations - VAX,
Alert - Computer Vision,
Correction - AI Labs in IEEE Spectrum,
AI - Challenge,
Conferences - Announcements and Calls for Papers
----------------------------------------------------------------------
Date: Wed, 16 Nov 83 10:53 EST
From: Tim Finin <Tim.UPenn@Rand-Relay>
Subject: pattern matchers
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Pattern Matchers
... My next puzzle is about pattern matchers. Has anyone looked carefully
at the notion of a "non-failing" pattern matcher? By that I mean one that
never or almost never rejects things as non-matching. ...
There is a long history of matchers which can be asked to "force" a match.
In this mode, the matcher is given two objects and returns a description
of what things would have to be true for the two objects to match. Two such
matchers come immediately to my mind - see "How can MERLIN Understand?" by
Moore and Newell in Gregg (ed), Knowledge and Cognition, 1973, and also
"An Overview of KRL, A Knowledge Representation Language" by Bobrow and
Winograd (which appeared in the AI Journal, I believe, in 76 or 77).
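To make the idea concrete, here is a minimal Prolog sketch of a "forcing"
matcher (purely illustrative: force_match/3 and the condition terms bind/2
and must_equal/2 are invented names, not the MERLIN or KRL interface, and
the usual append/3 is assumed). Instead of failing, it returns the
conditions under which the two terms would match.

% force_match(A, B, Conditions): never fails; Conditions is a list
% of things that would have to be true for A and B to match.
force_match(A, B, []) :-
        A == B, !.
force_match(A, B, [bind(A, B)]) :-
        var(A), !.
force_match(A, B, [bind(B, A)]) :-
        var(B), !.
force_match(A, B, Conds) :-
        compound(A), compound(B),
        A =.. [F|As], B =.. [F|Bs],
        length(As, N), length(Bs, N), !,
        force_args(As, Bs, Conds).
force_match(A, B, [must_equal(A, B)]).

force_args([], [], []).
force_args([A|As], [B|Bs], Conds) :-
        force_match(A, B, C1),
        force_args(As, Bs, C2),
        append(C1, C2, Conds).

% ?- force_match(f(X, a, g(1)), f(b, Y, g(2)), Conds).
% Conds = [bind(X, b), bind(Y, a), must_equal(1, 2)]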
------------------------------
Date: Fri 18 Nov 83 09:31:38-CST
From: CS.DENNEY@UTEXAS-20.ARPA
Subject: VAX Workstations
I am looking for information on the merits (or lack thereof) of the
VAX Workstation 100 for AI development.
------------------------------
Date: Wed, 16 Nov 83 22:22:03 pst
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Computer Vision.
There have been some recent articles in this list on computer
vision, some of them queries for information. Although I am
not in this field, I read with interest a review article in
Nature last week. Since Nature may be off the beaten track for
many people in AI (in fact articles impinging on computer science
are rare, and this one probably got in because it also falls
under neuroscience), I'm bringing the article to the attention of
this list. The review is entitled ``Parallel visual computation''
and appears in Vol 306, No 5938 (3-9 November), page 21. The
authors are Dana H Ballard, Geoffrey E Hinton and Terrence J
Sejnowski. There are 72 references into the literature.
Harry Weeks
g.weeks@Berkeley
------------------------------
Date: 17 Nov 83 20:25:30-PST (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: IEEE Spectrum Alert - (nf)
Article-I.D.: uiucdcs.3909
For safety's sake, let me add a qualification about the table on sources of
funding: it's incorrect. The University of Illinois is represented as having
absolutely NO research in 5th-generation AI, not even under OTHER funding.
This is false, and will hopefully be rectified in the next issue of the
Spectrum. I believe a delegation of our Professors is flying to the coast to
have a chat with the Spectrum staff ...
If we can be so misrepresented, I wonder how the survey obtained its
information. None of our major AI researchers remember any attempts to survey
their work.
Marcel Schoppers
U of Illinois @ Urbana-Champaign
------------------------------
Date: 17 Nov 83 20:25:38-PST (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: uiucdcs.3910
I agree [with a previous article].
I myself am becoming increasingly worried about a blithe attitude I
sometimes hear: if our technology eliminates some jobs, it will create others.
True, but not everyone will be capable of keeping up with the change.
Analogously, the Industrial Revolution is now seen as a Good Thing, and its
impacts were as profound as those promised by AI. And though it is said that
the growth of knowledge can only be advantageous in the long run (Logical
Positivist view?), many people became victims of the Revolution.
In this respect I very much appreciated an idea that was aired at IJCAI-83,
namely that we should be building expert systems in economics to help us plan
and control the effects of our research.
As for the localization of power, that seems almost inevitable. Does not the
US spend enough on cosmetics to cover the combined Gross National Products of
37 African countries? And are we not so concerned about our Almighty Pocket
that we simply CANNOT export our excess groceries to a needy country, though
the produce rot on our dock? Then we can also keep our technology to ourselves.
One very obvious, and in my opinion sorely needed, application of AI is to
automating legal, veterinary and medical expertise. Of course the law system
and our own doctors will give us hell for this, but on the other hand what kind
of service profession is it that will not serve except at high cost? Those most
in need cannot afford the price. See for yourself what kind of person makes it
through Medical School: those who are most aggressive about beating their
fellow students, or those who have the money to buy their way in. It is little
wonder that so few of them will help the underprivileged -- from the start
the selection criteria weigh against such motivation. Let's send our machines
in where our "doctors" will not go!
Marcel Schoppers
U of Illinois @ Urbana-Champaign
------------------------------
Date: 19 Nov 83 09:22:42 EST (Sat)
From: rej@Cornell (Ralph Johnson)
Subject: The AI Challenge
The recent discussions on AIlist have been boring, so I have another
idea for discussion. I see no evidence that AI is going to make
as much of a change in the world as data processing or information
retrieval. While research in AI has produced many results in side areas
such as computer languages, computer architecture, and programming
environments, none of the past promises of AI (automatic language
translation, for example) have been fulfilled. Why should I expect
anything more in the future?
I am a soon-to-graduate PhD candidate at Cornell. Since Cornell puts
little emphasis on AI, I decided to learn a little on my own. Most AI
literature is hard to read, as very little that is concrete is said. The best
book that I read (best for someone like me, that is) was the three-volume
"Handbook of Artificial Intelligence". One interesting observation was
that I already knew a large percentage of the algorithms. I did not
even think of most of them as being AI algorithms. The searching
algorithms (with the exception of alpha beta pruning) are used in many
areas, and algorithms that do logical deduction are part of computational
mathematics (just my opinion, as I know some consider this hard core AI).
Algorithms in areas like computer vision were completely new, but I could
see no relationship between those algorithms and algorithms in programs
called "expert systems", another hot AI topic.
[Agreed, but the gap is narrowing. There have been 1 or 2 dozen
good AI/vision dissertations, but the chief link has been that many
individuals and research departments interested in one area have
also been interested in the other. -- KIL]
As for expert systems, I could see no relationship between one expert system
and the next. An expert system seems to be a program that uses a lot of
problem-related hacks to usually come up with the right answer. Some of
the "knowledge representation" schemes (translated "data structures") are
nice, but everyone seems to use different ones. I have read several tech
reports describing recent expert systems, so I am not totally ignorant.
What is all the noise about? Why is so much money being waved around?
There seems to be nothing more to expert systems than to other complicated
programs.
[My own somewhat heretical view is that the "expert system" title
legitimizes something that every complicated program has been found
to need: hackery. A rule-based system is sufficiently modular that
it can be hacked hundreds of times before it is so cumbersome
that the basic structures must be rewritten. It is software designed
to grow, as opposed to the crystalline gems of the "optimal X" paradigm.
The best expert systems, of course, also contain explanatory capabilities,
hierarchical inference, constrained natural language interfaces, knowledge
base consistency checkers, and other useful features. -- KIL]
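To make that concrete: a deliberately tiny backward-chaining rule
interpreter in Prolog, which also records an explanation tree. The rules
and the predicates rule/2 and prove/2 are invented for the illustration
and stand in for no particular expert system; the point is only that
rules can be added or hacked without touching the interpreter.

% rule(Head, Body): Head holds if every goal in the list Body holds.
rule(diagnosis(flu), [symptom(fever), symptom(aches)]).
rule(symptom(fever), []).
rule(symptom(aches), []).

% prove(Goal, Proof): prove Goal and return an explanation tree for it.
prove(Goal, proof(Goal, SubProofs)) :-
        rule(Goal, Body),
        prove_all(Body, SubProofs).

prove_all([], []).
prove_all([G|Gs], [P|Ps]) :-
        prove(G, P),
        prove_all(Gs, Ps).

% ?- prove(diagnosis(flu), Proof).
% Proof = proof(diagnosis(flu), [proof(symptom(fever), []),
%                                proof(symptom(aches), [])])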
I know that numerical analysis and compiler writing are well developed fields
because there is a standard way of thinking that is associated with each
area and because a non-expert can use tools provided by experts to perform
computation or write a parser without knowing how the tools work. In fact,
a good test of an area within computer science is whether there are tools
that a non-expert can use to do things that, ten years ago, only experts
could do. Is there anything like this in AI? Are there natural language
processors that will do what YACC does for parsing computer languages?
There seem to be a number of answers to me:
1) Because of my indoctrination at Cornell, I categorize much of the
important results of AI in other areas, thus discounting the achievements
of AI.
2) I am even more ignorant than I thought, and you will enlighten me.
3) Although what I have said describes other areas of AI pretty well, yours
is an exception.
4) Although what I have said describes past results of AI, major achievements
are just around the corner.
5) I am correct.
You may be saying to yourself, "Is this guy serious?" Well, sort of. In
any case, this should generate more interesting and useful information
than trying to define intelligence, so please treat me seriously.
Ralph Johnson
------------------------------
Date: Thu 17 Nov 83 16:57:55-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Conference Announcements and Call for Papers
[Reprinted from the SU-SCORE bboard.]
Image Technology 1984 37th annual conference May 20-24, 1984
Boston, Mass. Jim Clark, papers chairman
British Robot Association 7th annual conference 14-17, May 1984
Cambridge, England Conference director-B.R.A. 7,
British Robot Association, 28-30 High Street, Kempston, Bedford
MK427AJ, England
First International Conference on Computers and Applications
Beijing, China, June 20-22, 1984 co-sponsored by CIE computer society
and IEEE computer society
CMG XIV conference on computer evaluation--preliminary agenda
December 6-9, 1983 Crystal City, Va.
International Symposium on Symbolic and Algebraic Computation
EUROSAM 84 Cambridge, England July 9-11, 1984 call for papers
M. Mignotte, Centre de Calcul, Universite Louis Pasteur, 7 rue
Rene Descartes, F-67084 Strasbourg, France
ACM Computer Science Conference The Future of Computing
February 14-16, 1984 Philadelphia, Penn. Aaron Beller, Program
Chair, Computer and Information Science Department, Temple University
Philadelphia, Penn. 19122
HL
------------------------------
Date: Fri 18 Nov 83 04:00:10-CST
From: Werner Uhrig <CMP.WERNER@UTEXAS-20.ARPA>
Subject: ***** Call for Papers: LISP and Functional Programming *****
please help spread the word by announcing it on your local machines. thanks
---------------
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
() CALL FOR PAPERS ()
() 1984 ACM SYMPOSIUM ON ()
() LISP AND FUNCTIONAL PROGRAMMING ()
() UNIVERSITY OF TEXAS AT AUSTIN, AUGUST 5-8, 1984 ()
() (Sponsored by the ASSOCIATION FOR COMPUTING MACHINERY) ()
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
This is the third in a series of biennial conferences on the LISP language and
issues related to applicative languages. Especially welcome are papers
addressing implementation problems and programming environments. Areas of
interest include (but are not restricted to) systems, large implementations,
programming environments and support tools, architectures, microcode and
hardware implementations, significant language extensions, unusual applications
of LISP, program transformations, compilers for applicative languages, lazy
evaluation, functional programming, logic programming, combinators, FP, APL,
PROLOG, and other languages of a related nature.
Please send eleven (11) copies of a detailed summary (not a complete paper) to
the program chairman:
Guy L. Steele Jr.
Tartan Laboratories Incorporated
477 Melwood Avenue
Pittsburgh, Pennsylvania 15213
Submissions will be considered by each member of the program committee:
Robert Cartwright, Rice
Jerome Chailloux, INRIA
Daniel P. Friedman, Indiana
Richard P. Gabriel, Stanford
Martin L. Griss, Hewlett-Packard
Peter Henderson, Stirling
William L. Scherlis, Carnegie-Mellon
Dana Scott, Carnegie-Mellon
Guy L. Steele Jr., Tartan Laboratories
David Warren, Silogic Incorporated
John Williams, IBM
Summaries should explain what is new and interesting about the work and what
has actually been accomplished. It is important to include specific findings
or results and specific comparisons with relevant previous work. The committee
will consider the appropriateness, clarity, originality, practicality,
significance, and overall quality of each summary. Time does not permit
consideration of complete papers or long summaries; a length of eight to twelve
double-spaced typed pages is strongly suggested.
February 6, 1984 is the deadline for the submission of summaries. Authors will
be notified of acceptance or rejection by March 12, 1984. The accepted papers
must be typed on special forms and received by the program chairman at the
address above by May 14, 1984. Authors of accepted papers will be asked to
sign ACM copyright forms.
Proceedings will be distributed at the symposium and will later be available
from ACM.
Local Arrangements Chairman:
Edward A. Schneider
Burroughs Corporation
Austin Research Center
12201 Technology Blvd.
Austin, Texas 78727
(512) 258-2495
CL.SCHNEIDER@UTEXAS-20.ARPA

General Chairman:
Robert S. Boyer
University of Texas at Austin
Institute for Computing Science
2100 Main Building
Austin, Texas 78712
(512) 471-1901
CL.BOYER@UTEXAS-20.ARPA
------------------------------
End of AIList Digest
********************
∂21-Nov-83 0222 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #53
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Nov 83 02:22:04 PST
Date: Sunday, November 20, 1983 5:52PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #53
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Monday, 21 Nov 1983 Volume 1 : Issue 53
Today's Topics:
Implementations - An Algorithmic Capability &
Concurrency & Uncertainties & Search
LP Library - Update
----------------------------------------------------------------------
Date: 17 Nov 83 22:28:07-PST (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Rule-Based Algorithm - (nf)
I'm not sure what the algorithm-izers are looking for; they seem
to know more about what they don't want (no criticism intended,
one has to start somewhere). What's people's opinion of systems
like Prolog/KR (Japanese effort, published in "New Generation
Computing", first issue); or of QUTE (IJCAI 83), which merges
Prolog and Lisp (IJCAI 83 also contains another - similar -
attempt) ? At this point I'm not convinced that what Russ Abbott
labels the "coat-tail" approach to Prolog algorithm is inherently
evil. Granted, though, that there may be better ways to do it (E.g.
come up with a more general formalism, one that includes an
"implementation" of 1st-order logic as a subset, as the IJCAI 83
papers try to do). As far as I know the Edinburgh Prolog interpreters
were designed to activate left-to-right just so that algorithms could
be coded reasonably efficiently. But perhaps that ought to be viewed
as a cop-out, I.e. as an admission of the lack of a better idea ?
In my opinion the "coat-tail" approach actually has a very strong
point in its favour, namely that at least the procedures/algorithms
are RULE-BASED. This is against Richard O'Keefe: Dijkstra, Wirth and
Hoare do not necessarily have the last word in algorithmic
programming. I would very much rather write small chunks of impure
Prolog than have to wade through Pascal looking for a matching end
that closes a block having 10 nested loops ! Of course, Prolog
doesn't have a patent on rule-based computing, and I would be
even happier with a language that is more algorithmic AND
declarative AND rule-based. So perhaps I should be using some novel
form of production system.
As it happens, that's exactly what I am doing -- designing a language
that borrows from both Prolog and production systems, to see if their
marriage can solve some problems in both. At this point I'm receiving
some very negative comments from some referees who consider that,
even though there is a uniform logical basis to my language
(admittedly equivalence rather than implication), and though it
contains Prolog as a subset, it also contains too much provision
for procedure to be given the respect that only real logic deserves
...
Interestingly enough, the Japanese seem to have adopted a different
stance on that. Though (obviously) not everything they do is very
aesthetic, they are manifesting (as I see it) a strong tendency
toward pragmatics and (oh dear) "user-convenience". Perhaps too much
of "whatever works". At which point the quote from Wirth on language
simplicity (often consonant with logical purity) is applicable. The
only way to please everybody is to come up with a pure formalism
capable of supporting both logic interpretation and algorithms.
Otherwise your paper will be rejected by approximately half its
referees, and you'll never be seen in print except in Japan !
Perhaps -- perhaps not -- we will eventually come to admit that too
many ingredients spoil the language, and will settle for a few less
general languages whose programs communicate by Unix pipes ? Until
then I tend to sympathise with the Japanese and their search for a
programming tool that supports algorithms, pattern-matching, back
tracking AND rule-based knowledge representation.
-- Marcel Schoppers
U of Illinois @ Urbana-Champaign
------------------------------
Date: Tue, 15 Nov 83 17:26:55 PST
From: Zauderer@PARC
Subject: Interest in Concurrent Prolog
Greetings, Prolog devotees ! I've recently returned from the
Weizmann Institute of Science, where I spent my summer working with
Ehud Shapiro (of Concurrent Prolog fame). I'm a Berkeley student,
but am taking the semester off to work at Xerox PARC. I'm interested
in maintaining contact with Prolog-related things -- specifically,
with Concurrent Prolog. I'm trying to set up some informal lines of
communication between CP users/potential users, in order to make the
following items available for consumption:
- Opinions about Concurrent Prolog, E.g. reviews of the
language, ideas, bugs, etc.
- Descriptions of CP projects/studies (if any!) currently
underway.
- Expressions of interest in finding out/finding out more about
CP.
- Related information, E.g. object-oriented programming and
parallel processing concepts using Prolog/CP, 5th generation
stuff, articles/reports worth reading concerning CP concepts and
techniques.
Thanks,
-- Marvin Zauderer
Zauderer@PARC
------------------------------
Date: Tuesday, 15-Nov-83 00:31:14-GMT
From: O'Keefe HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: Uncertainties, Breadth-First Search
I am currently writing a paper which extends Udi Shapiro's
Logic Programs with uncertainties. The mathematical basis
is finished, I am now writing up how you can interpret it.
If there's any interest I'll send it to {SU-SCORE} when I'm
done. The main points are
1) you can use any complete lattice with a least and
a greatest element as your certainty space
2) you can handle disjunction. It turns out that you
have a lot of freedom in defining the certainty
combination rule for disjunction, but you *must*
use the least upper bound for conjunction. (Since
Shapiro used numbers (0,1] the least upper bound is
always one of the two numbers.)
3) you can have "super-certain" rules, which attribute
greater certainty to their conclusion than to any of
their hypotheses. (You don't have to, but if you
want them they make sense.)
4) the theory lets you have a type tree (or several
type trees), I don't know how to interpret that
efficiently yet.
Shapiro's system can't be extended to handle general negation,
because it is based on the standard formal semantics for logic
programs, which associates a monotone map Tp with a logic program
P. In the presence of negation the map is no longer monotone.
You can write an interpreter for MYCIN's rules or PROSPECTOR's
rules in Prolog. (You can write an interpreter for Fortran if you
try hard enough.) The thing is, Prospector's pseudo-probabilities
are not supposed to be a logic, and don't obey all the laws that
you would expect a logic to follow, so you can't expect to gain
anything from Prolog's origin in logic. There is another big
difference between MYCIN and PROSPECTOR on one side and Shapiro's
formulation on the other: the former two handle uncertain
PROPOSITIONS, Shapiro's system handles uncertain first-order
atoms. Propositional calculus and Horn clauses are both special
cases of 1st-order PC, but they are simple in different ways.
If you try to handle negation in clauses you are going to have
trouble.
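For anyone who wants something concrete to play with, here is a
bare-bones certainty meta-interpreter in Prolog. It uses the simplest
numeric convention (certainties in (0,1], minimum across a conjunction,
scaled by the clause's own factor); that is just one choice of
combination rule, not Shapiro's system or the lattice formulation above,
and rule/3 and solve/2 are names invented for the example.

% rule(Head, Body, C): a clause with certainty factor C in (0,1].
% Body is a list of subgoals; facts have an empty body.
rule(wet(grass), [rained], 0.9).
rule(wet(grass), [sprinkler_on], 0.7).
rule(rained, [], 0.6).
rule(sprinkler_on, [], 0.8).

% solve(Goal, C): C is the certainty attached to one derivation of Goal.
solve(Goal, C) :-
        rule(Goal, Body, C0),
        solve_body(Body, 1.0, CB),
        C is C0 * CB.

solve_body([], C, C).
solve_body([G|Gs], Acc, C) :-
        solve(G, CG),
        NewAcc is min(Acc, CG),
        solve_body(Gs, NewAcc, C).

% ?- solve(wet(grass), C).
% C = 0.54 ;
% C = 0.56          (up to floating-point rounding)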
The problem with searching through formulae equivalent to a
given formula is a general one. The problem is that rules like
        X+Y = Y+X
        X = 0+X
do not, in general, terminate. You can handle the first sort by
using a fancy unification algorithm: there is a lot published
on Unification Algorithms with special equational theories built
in. I have never understood it, but Lincoln Wallen here coded
up an AC-unification algorithm in Prolog. This still doesn't
cope with X=0+X. There is a lot known about rewrite rules;
look under Huet in your library catalogue to start with.
The way to do a breadth-first search in Prolog is to
program it up explicitly. You keep a queue of unexplored
alternatives.
bfs(Start, Soln) :-
        bfs([Start|Qtail], Qtail, Soln).

bfs([], _, _) :- !, fail.               % queue exhausted: no solution
bfs([Soln|_], _, Soln) :-
        solution(Soln).
bfs([Node|Queue], Qtail, Soln) :-
        sprout(Node, List),
        append(List, NewTail, Qtail), !,  % splice successors onto the end of the queue
        bfs(Queue, NewTail, Soln).
E.g. to search through expressions equivalent to a given
one, where you have a predicate
        one_rewrite_step(Expr, Rewritten)
you would define
        sprout(Expr, Nexprs) :-
                setof(Nexpr, one_rewrite_step(Expr, Nexpr), Nexprs).
This code is a bit of a mess, and has not been tested. It is
meant as a suggestive sketch, that's all. To avoid generating
(sorry, exploring) the same node more than once, you can keep
the original [Start|Qtail] list and do
bfs(Start, Soln) :-
        bfs([Start|Qtail], [Start|Qtail], Soln).

bfs([], _, _) :- !, fail.
bfs([Soln|_], _, Soln) :-
        solution(Soln).
bfs([Node|Rest], Explored, Soln) :-
        sprout(Node, NewNodes),
        join(NewNodes, Explored),
        !,
        bfs(Rest, Explored, Soln).

join([H|T], L) :-
        member(H, L), !,
        join(T, L).
join([], _).
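For concreteness, here is a toy way to supply sprout/2 and solution/1 so
that either version above can be run as-is (the edge/2 graph is invented
just to exercise the code; for the rewriting example, sprout/2 would
instead be built from one_rewrite_step/2 as shown earlier, and the usual
member/2 and append/3 are assumed).

% A toy search space, just to exercise bfs/2.
edge(a, b).  edge(a, c).
edge(b, d).  edge(c, e).
edge(d, f).  edge(e, f).

solution(f).

% sprout/2 must not fail, or the search dies at a dead-end node,
% so fall back to an empty successor list.
sprout(Node, Next) :-
        setof(N, edge(Node, N), Next), !.
sprout(_, []).

% ?- bfs(a, Soln).
% Soln = f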
------------------------------
Date: Sun 20 Nov 83 17:17:54-PST
From: Chuck Restivo <Restivo@SU-SCORE>
Subject: LP Library Update
Three utilities have been added to the {SU-SCORE}PS:<Prolog>
directory. Thanks to the authors, Lawrence Byrd and Richard
O'Keefe.
SetUtl.Pl Purpose: set manipulation utilities
ReadIn.Pl Purpose: read in a sentence as a
list of words
Queues.Pl Purpose: define queue operations
If *you* have anything useful, send it in !
-ed
------------------------------
End of PROLOG Digest
********************
∂21-Nov-83 1021 @MIT-MC:Laws@SRI-AI AIList
Received: from MIT-MC by SU-AI with TCP/SMTP; 21 Nov 83 10:20:55 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 21 Nov 83 12:09-EST
Date: Mon 21 Nov 83 09:11:50-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AIList
To: Phil-Sci%MIT-OZ@MIT-MC.ARPA
Although several Phil-Sci members have generously given me permission
to reprint their messages, the approval was not unanimous. At least
one member feels that the possibility of being reprinted in AIList
would have and has had a negative effect on the Phil-Sci discussion.
I therefore apologize for the two or three paragraphs I reprinted in
the early days of AIList, and I withdraw my suggestion for a special
issue publicizing Phil-Sci. If any of you wish to contribute directly
to AIList@SRI-AI, please feel welcome.
-- Ken Laws
-------
∂21-Nov-83 1025 @MIT-MC:marcus@AEROSPACE Distribution list
Received: from MIT-MC by SU-AI with TCP/SMTP; 21 Nov 83 10:25:04 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 21 Nov 83 13:08-EST
Date: 21 November 1983 1006-PST (Monday)
From: marcus at AEROSPACE (Leo Marcus)
Subject: Distribution list
To: phil-sci%mit-oz at mit-mc.arpa
CC: marcus
Please remove me from the phil-sci list.
Thanks, Leo Marcus (MARCUS@AEROSPACE)
∂21-Nov-83 1119 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: AIList
Received: from MIT-MC by SU-AI with TCP/SMTP; 21 Nov 83 11:17:58 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 21 Nov 83 13:39-EST
Date: Mon 21 Nov 83 13:19:30-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: AIList
To: Laws@SRI-AI.ARPA
cc: Phil-Sci%MIT-OZ@MIT-MC.ARPA
In-Reply-To: Message from "Ken Laws <Laws@SRI-AI.ARPA>" of Mon 21 Nov 83 12:10:08-EST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AIList
Although several Phil-Sci members have generously given me permission
to reprint their messages, the approval was not unanimous....
and I withdraw my suggestion for a special issue publicizing Phil-Sci.
I regret that the permission was not unanimous, and thank you for your efforts
to bring the discussion to a wider audience. Would it be possible to consider
including only those contributors who did give permission? (If any contributor
still feels that this would be inappropriate, I withdraw the suggestion.)
If any of you wish to contribute directly
to AIList@SRI-AI, please feel welcome.
Please feel welcome to consider at least that portion of the discussion
which I authored as having been directly contributed to AIList.
-=*=- rick
-------
∂21-Nov-83 1154 KJB@SRI-AI.ARPA Fujimura's visit
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Nov 83 11:53:41 PST
Date: Mon 21 Nov 83 11:53:59-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Fujimura's visit
To: csli-folks@SRI-AI.ARPA
Fujimura, from Bell Labs, just called and will be in town Wednesday.
He would like to see our "lab". He will be here in the afternoon.
It would be good if he could talk with lots of people then, and
get a sense of what is going on. How about dropping by for tea that
day (or earlier) and chatting with him?
-------
∂21-Nov-83 1311 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: AIList
Received: from MIT-MC by SU-AI with TCP/SMTP; 21 Nov 83 13:08:23 PST
Date: Mon 21 Nov 83 16:04:00-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: AIList
To: phil-sci%MIT-OZ@MIT-MC.ARPA
In-Reply-To: Message from "RICKL%MIT-OZ@MIT-MC.ARPA" of Mon 21 Nov 83 14:09:07-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: AIList
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AIList
Although several Phil-Sci members have generously given me permission
to reprint their messages, the approval was not unanimous....
and I withdraw my suggestion for a special issue publicizing Phil-Sci.
....Would it be possible to consider
including only those contributors who did give permission? (If any contributor
still feels that this would be inappropriate, I withdraw the suggestion.)
At least one person (who did not respond to Ken) has written to me suggesting
that this would be inappropriate, and I therefore withdraw my proposal.
-=*=- rick
-------
∂21-Nov-83 1311 ALMOG@SRI-AI.ARPA reminder on why context wont go away
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Nov 83 13:10:07 PST
Date: 21 Nov 1983 1302-PST
From: Almog at SRI-AI
Subject: reminder on why context wont go away
To: csli-friends at SRI-AI
On Tuesday 11.22.83 we have our eighth meeting. The speaker will be
Julius Moravcsik, who will talk about a communication model rather
than a reference model for context dependent expressions.
Next week the speaker will be Peter Gardenfors, who is visiting from
Sweden. His talk will be on providing an epistemic semantics that
explains the context dependence of conditionals. We have a slight
problem on that day: Sellars' Kant lectures begin at 4.15. I suggest
that we start earlier than usual, at 2.30. I will verify
tomorrow that this is ok with all of you.
I attach the abstract of J.Moravcsik's talk.
Indexicals: A communication model
Ventura Hall, 3.15pm, 11.22.83
The referential vs. communicational approaches to context dependent expressions
will be compared. The comparison will be given on four levels:
1. Semantic interpretation is pragmatics-laden. Comparison with
the interpretation of UNIVERSAL propositions.
2. Do indexicals refer to continuants or to stages?
3. What is asserted by an indexical sentence?
4. The interaction between speaker and hearer while using indexicals.
A communication-based picture will be defended over a reference based
picture.
-------
∂21-Nov-83 1315 STOLFI@SU-SCORE.ARPA Re: towards a more perfect department
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Nov 83 13:11:40 PST
Date: Mon 21 Nov 83 13:03:11-PST
From: Jorge Stolfi <STOLFI@SU-SCORE.ARPA>
Subject: Re: towards a more perfect department
To: bureaucrat@SU-SCORE.ARPA, students@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA,
secretaries@SU-SCORE.ARPA, research-associates@SU-SCORE.ARPA,
bureaucrat@SU-SCORE.ARPA
In-Reply-To: Message from "Student Bureaucrats <PATASHNIK@SU-SCORE.ARPA>" of Wed 16 Nov 83 17:28:28-PST
Yes, I am interested. Any day but Wednesday and Thursday (AFLB) would be ok.
jorge
-------
∂21-Nov-83 1521 @MIT-MC:RICKL%MIT-OZ@MIT-MC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 21 Nov 83 15:20:39 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 21 Nov 83 17:41-EST
Date: Mon 21 Nov 83 17:27:20-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: limitations of logic
To: JMC@SU-AI.ARPA
cc: phil-sci%oz@MIT-MC, phw%MIT-OZ@MIT-MC.ARPA, dughof%MIT-OZ@MIT-MC.ARPA
In-Reply-To: Message from "John McCarthy <JMC@SU-AI>" of Thu 17 Nov 83 02:44:50-EST
Date: 16 Nov 83 2342 PST
From: John McCarthy <JMC@SU-AI>
Subject: limitations of logic
I will argue that lots of axiomatizations (note spelling) are consistent.
So far as I know, the statement that they are inconsistent is entirely
unsupported. I assert, however, that axiomatizations of common sense
domains will require non-monotonic reasoning to be strong enough, and
this may be confused with inconsistency by the naive. Domains of
scientific physics will not require non-monotonic reasoning, because
they aspire to a completeness not realizable with common sense domains.
Yes, lots of axiomatizations are consistent, but the existence of such
is not under debate. What is debated is "consistent axiomatization(s)
of expert knowledge of non-trivial domains". Below I will try to
support the claim.
In passing, note that in virtue of:
(a) Turing equivalence;
(b) the formal axiomatization of a Turing machine;
it follows that as much of science as is capturable by A.I. can also be
formally axiomatized. Since I believe that much of science is so
capturable, I am formally in complete agreement with you. Observe also
how this formal, axiomatic agreement obscures the real differences.
Hewitt, et. al., probably have a potentially useful intuition, but unless they
make the effort to make it as precise as possible, this potential will
not be realized.
I agree.
Of course, I didn't hear Hewitt's lecture,....
Carl announced at his talk that the gist of his remarks, in a somewhat
preliminary form, are in "Analyzing the Roles of Descriptions and
Actions in Open Systems", M.I.T. A.I. Memo 727, April 1983, and were also
presented at AAAI'83 and are in the proceedings.
-=*=- rick
================ longer message follows ================
PREAMBLE: The *attempt* to formalize is an essential and indispensable
component of science. To argue against the formal achievability of
consistent formal axiomatization is *not* to argue against the practical
utility of the attempt to achieve it. Logic is *not* useless.
Briefly, support for the claim:
(1) Empirical, as Carl suggested. In your reply you assert that the
claim is "entirely unsupported", but I notice that you stop short of
exhibiting a counter-example.
(2) The ubiquitous nature of real-world anomalies and exceptions.
Knowledge of the scientific facts of a domain must be counted as part of
the expert knowledge of a scientist. This includes knowledge of
anomalies and exceptions to theory. Any axiomatization which admits of
a known anomaly is inconsistent with what is known by experts to be
true, and so can hardly be called a consistent axiomatization of the
expert knowledge of that domain.
(3) The observed absence of an asymptotic convergence to a single
consistent axiom set.
Instead, science seems to progress by incremental refinement punctuated
by revolutionary revision, and shows no sign of doing otherwise. Thus,
even assuming that at any one point a consistent axiomatization of
scientific knowledge in a domain were to be produced, that axiomatization
would slowly become more and more inadequate until it was simply
discarded or superseded (with the same fate awaiting its successor).
In practical terms: this means that the surrounding theoretical
framework is likely to be replaced first, before consistency within that
framework can ever be achieved.
(4) The general failure of adjoining ceteris paribus clauses as a
strategy for achieving or maintaining scientific consistency.
Inconsistencies are patched by adjoining ceteris paribus clauses to the
axioms, but new anomalies always arise requiring new ceteris paribus
clauses. This I take to be the application to science of Carl's point
about Perpetual Inconsistency: if a particular inconsistency is patched,
the resulting axiomatization will still be inconsistent.
Non-monotonic reasoning is a structured mechanism for applying ceteris
paribus clauses, and it is not clear that it renders the resulting
system any more consistent than adjoining ceteris paribus clauses to a
scientific theory. For common-sense reasoning this is probably more
than adequate, however.
(5) The difficulty of *formally* separating "expert knowledge" from
"expert belief".
Any *inductive* law is only "believed with great strength". There is no
*formal* demarcation in the gradation of belief from "laws" to "accepted
wisdom". It is unlikely that any formal axiomatization which includes
the latter will be consistent, but unclear that it can be totally
excluded even in principle.
(footnote: this does *not* assert that "accepted wisdom" is acceptable
as the final goal of science. the fact of twilight does not mean that
day and night are indistinguishable. this does assert that there is not
a clearly defined separation of scientific knowledge and scientific
belief with respect to inductive laws.)
(6) The remarkable sparseness of attempts in science to do this at all.
(This may be more due to the fact that scientists really don't much care
about them in any practical sense.) Even the best-known attempt
(Newtonian mechanics) suffers from formal ambiguity. The second law may
be variously interpreted as a "law" (relating the primitive notions of
force, mass, and acceleration) or as a "definition" (of force, in terms
of the primitive notions of mass and acceleration).
(7) The frequency with which a new, initially less consistent and
successful theory displaces an older, more consistent one.
Consistency seems to be a poor measure of a theory. In particular,
consistency is not necessarily required in order for a theory to become
generally accepted. Galileo was never able to explain why the earth was
not subject to perpetual hurricane-force winds, if it really did move;
the prediction that oxygen and nitrogen should separate and settle out
in layers, if they really were a mixture instead of a compound, was an
embarrassment to Dalton's theory for years. Examples like this abound.
(8) Inconsistencies in scientific knowledge itself.
For example, any attempt to formally axiomatize neurophysiology at this
point would be ridiculous, and also unhelpful. Which axiom set you
would get would depend not only on which scientist you asked, but also
on when you asked her. It is not at all clear that a plethora of
conflicting axiom sets is formally more consistent than a single
inconsistent one. Nor is it clear that individual scientists engaged in
active research always hold consistent beliefs about their domain.
For science to progress, indeed, it seems necessary that individual
scientists hold beliefs which do conflict with those of their
colleagues.
(9) It has not been found to be useful as a source of scientific
progress (recall the distinction between the axiomatization and the
attempt, above).
There is no important scientific discovery I am aware of which followed
as a direct result of the formal axiomatization of a scientific theory.
(Of course, "formal axiomatization" does *not* mean "application of
mathematical techniques to", but that much is obvious.)
One of several reasons EURISKO is fascinating is that it may provide
a counter-example. The 3-D VLSI discovery is impressive.
-=*=-
-------
∂21-Nov-83 1639 @MIT-MC:perlis%umcp-cs@CSNET-CIC Logic in a teacup
Received: from MIT-MC by SU-AI with TCP/SMTP; 21 Nov 83 16:39:26 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 21 Nov 83 19:33-EST
Date: 21 Nov 83 18:17:20 EST (Mon)
From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Logic in a teacup
To: phil-sci%mit-oz@mit-mc
Via: UMCP-CS; 21 Nov 83 18:21-EST
1. A significant non-mathematical theory presumably is a
*naive* theory for surely any serious (scientific) theory is to
be consistent. So we aren't talking of theories within science
but rather theories held in a naive mode by cognitive agents,
as seen by we scientists. E.g., Hayes' naive physics studies
may well turn out to be (necessarily) inconsistent, for which
reason we need real physics if we want real consistency.
2. This may turn out to be nothing more than the frame
problem: the world is too complex to encapsulate in axioms
that are literally true and not merely defaults, except again
if we go to a *deep* level such as real physics which is
inappropriate for everyday reasoning. We need a teacup logic,
to use an example of Pat Hayes: a teacup rises when its saucer
is lifted--we can even try to make it very precise by saying
the cup must be on the saucer, the saucer must be strong, etc.
But still other possibilities may occur we hadn't thought of:
a string may be tied to the cup, etc. We need to speak of the
set of forces acting on the cup, to do a good job, and soon we
are in Physics 101, then 501, etc. So defaults really do
appear essential to a teacup logic.
3. Is teacup logic necessarily inconsistent? McCarthy says
no. But it depends on what you want. If one requirement of a
significant system is that it be able to recognize that
defaults are just that--possibly in error--then observing a
teacup fall when its saucer is lifted should not simply replace
the 'teacup up' default with 'teacup down' but rather let them
both live so that the conflict can be seen. An instance of the
usefulness of this would be if our tea-drinking robot were to
remind itself of the *last* time it used that default and got
soaked as a consequence; or if it learns in the future to be
cautious in applying what it 'knows' in dangerous situations.
We need to let conflicts arise in our (robots') beliefs and
*then* resolve them with special methods for dealing with
inconsistent (naive) theories.
4. This does *not* mean embracing all sentences as true;
there's a big misconception of what a theorem is in logic. A
theorem in logic, in contradistinction from ordinary
mathematics, is not something that has been proven, but simply
something that *could* be proven, *if* the right steps were
taken. So the fact that all sentences of an inconsistent
theory are 'theorems' in the logic sense, does not mean they
are *proven* by the system in question. Indeed systems of logic
prove nothing at all; the whole issue of producing actual
proofs in some goal-oriented context is one that has to be
addressed as a further topic not determined by the choice of
logic. And so the issue is, what are 'the right steps' for a
significant naive theory to be coupled with?
5. So we need to be able to select our steps, and this is the
*control* issue that is at the heart of AI: what guides the
choices, the attention, of the robot? How does it so easily
turn from default to counterexample, and then to another
default about another matter, rather than be flooded all the
time with all its beliefs at once? This is *not* outside
logic. Any given proof that is actually *proven* in fact
constitutes just such a choice (and a hard one it can be, as
any logic student can testify). We need good models for memory
structures, feeding appropriate beliefs into our
proof-generator at the right times, as well as for the generator
itself (how many steps to take, etc).
6. A final word on meaning: Tarski can be viewed most
fruitfully as providing not a definition of *meaning* but of
the *different* *possible* meanings of a statement in different
contexts. You pick the context you want and then the meaning
comes with it. The meaning is then part of your naive theory,
not external to it. From the outside, of course one can only
see all the different possibilities, and indeed there is in
general no one *right* meaning out there. It is in *my* head
that 'this apple' means the one I have in mind; on the outside
people can speculate on just what I might have meant. Their
ability to get it 'right' (tho this is not a well-defined
notion, as Stich and others would argue) suggests that we hold
similar naive theories in our heads, but doesn't as far as I
can see show that there is a 'real' meaning and theory that we
somehow must divine in our cog sci efforts.
∂21-Nov-83 1856 GOLUB@SU-SCORE.ARPA CSD Chairperson Extraordinaire Required
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Nov 83 18:56:20 PST
Return-Path: <cheriton@diablo>
Received: from Navajo by SU-SCORE.ARPA with TCP; Thu 17 Nov 83 16:01:02-PST
Received: from Diablo by Navajo with TCP; Thu, 17 Nov 83 15:57:41 pst
Date: Thu, 17 Nov 83 15:57 PST
From: David Cheriton <cheriton@diablo>
Subject: CSD Chairperson Extraordinaire Required
To: su-bboards@Diablo
ReSent-date: Mon 21 Nov 83 18:55:23-PST
ReSent-from: Gene Golub <GOLUB@SU-SCORE.ARPA>
ReSent-to: faculty@SU-SCORE.ARPA
CSD is mounting a search (or whatever one does to start searching) for
someone to fill the position of Professor and Chairperson of Computer
Science. Ideally such a person should be able to walk on water, both
administratively and academically. We, the search committee, feel it
would be advantageous to identify particularly desirable candidates
for this position, in addition to openly soliciting applications.
Please mail suggestions of people that you feel we should seriously
consider to dek@su-ai.
∂21-Nov-83 2002 GOLUB@SU-SCORE.ARPA [Robert L. White <WHITE@SU-SIERRA.ARPA>: Space]
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Nov 83 20:02:24 PST
Date: Mon 21 Nov 83 20:00:55-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: [Robert L. White <WHITE@SU-SIERRA.ARPA>: Space]
To: faculty@SU-SCORE.ARPA
Does anyone have some good guestimate of the kind of space we will be
needing? The department has slightly over 30,000 square feet at its
disposal. GENE
---------------
Return-Path: <WHITE@SU-SIERRA.ARPA>
Received: from SU-SIERRA.ARPA by SU-SCORE.ARPA with TCP; Mon 21 Nov 83 12:09:14-PST
Date: Mon 21 Nov 83 12:09:07-PST
From: Robert L. White <WHITE@SU-SIERRA.ARPA>
Subject: Space
To: Golub@SU-SCORE.ARPA
cc: jparker@SU-SIERRA.ARPA
Plans for the Science and Engineering Quad have undergone a sudden
acceleration. Part of this plan is a joint EE/CSD building. To do a first cut
at building sizes, etc., we need to know how many sq ft CSD now occupies,
and what you see your needs to be 5 to 10 years down the road. Net sq ft
will be most useful for planning, and it would be highly desirable to have
some numbers by the middle of next week.
How's that for a tall number?
Bob
-------
-------
∂21-Nov-83 2016 GOLUB@SU-SCORE.ARPA lunch
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Nov 83 20:04:57 PST
Date: Mon 21 Nov 83 20:03:40-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: lunch
To: faculty@SU-SCORE.ARPA
There'll be a lunch on Tuesday as usual. I have nothing planned.
Mike Genesereth urged us to have more technical discussions. Does
anyone have some nice interesting talk they would like to give?
GENE
-------
∂21-Nov-83 2017 GOLUB@SU-SCORE.ARPA Disclosure form
Received: from SU-SCORE by SU-AI with TCP/SMTP; 21 Nov 83 20:17:19 PST
Date: Mon 21 Nov 83 20:16:38-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Disclosure form
To: Academic-Council: ;
I have sent a DISCLOSURE FORM to each of you. There is some confusion
about this form. You can complete it, indicating the consulting you have
done this past year. Alternatively, you can indicate to me your current
consulting. Indicate the companies for which you are consulting as well
as any corporations for which you are a director.
GENE
-------
∂21-Nov-83 2212 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 21 Nov 83 22:12:35 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 22 Nov 83 01:06-EST
Date: 21 Nov 83 23:24:19 EST (Mon)
From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Re: limitations of logic
To: phil-sci%mit-oz@mit-mc
Via: UMCP-CS; 21 Nov 83 23:31-EST
From: RICKL%MIT-OZ%mit-mc.arpa@UDel-Relay
Subject: Re: limitations of logic
Consistency seems to be a poor measure of a theory. In
particular, consistency is not necessarily required in order
for a theory to become generally accepted. Galileo was never
able to explain why the earth was not subject to perpetual
hurricane-force winds, if it really did move; the prediction
that oxygen and nitrogen should separate and settle out in
layers, if they really were a mixture instead of a compound,
was an embarrassment to Dalton's theory for years. Examples
like this abound.
It seems to me that you are talking not about scientific theories but
about the cluttered workbench of ideas that scientists deal with in
their efforts to devise suitable theories. No one would knowingly
present an inconsistent theory as such, without further explanation as
to how the inconsistencies were to be viewed, i.e., without a way to
eliminate them. Thus Dirac excused the mathematical inconsistencies in
his famous "dirac function" by insisting that some underlying sense was
there rather than that Nature really was that way (inconsistent). So
his 'theory' was taken in that sense: not yet a full-fledged theory so
much as a hope for a theory. Later efforts clarified this. Similarly
Feynman insists that his rules for eliminating divergences work nicely
but totally obscure the issues and so are not satisfactory as a theory.
For example, any attempt to formally axiomatize neurophysiology
at this point would be ridiculous, and also unhelpful. Which
axiom set you would get would depend not only on which
scientist you asked, but also on when you asked her. It is not
at all clear that a plethora of conflicting axiom sets is
formally more consistent than a single inconsistent one. Nor
is it clear that individual scientists engaged in active
research always hold consistent beliefs about their domain.
For science to progress, indeed, it seems necessary that individual
scientists hold beliefs which do conflict with those of their
colleagues.
Yes, of course, but ditto my above comments. We simply have no 'theory'
of neurophysiology at present. We do have a *science* of that name, but
a science is a search for a theory, not a theory itself. The search
of course passes thru all sorts of confusions, and this is nothing to
be deplored. You are discussing the theory of how science works, and it
works by all sorts of inconsistencies, but this is not the same thing
as the theories that science is working *at*.
There is no important scientific discovery I am aware of which followed
as a direct result of the formal axiomatization of a scientific theory.
(Of course, "formal axiomatization" does *not* mean "application of
mathematical techniques to", but that much is obvious.)
I believe there was an infamous case in quantum field theory about 20
years back: an initially well-received theory was then found to be
inconsistent, and immediately abandoned as a result, and with great
embarrassment. (Who can tell me the names?)
--Don Perlis
Addendum:
I see now I must explain a remark in my own earlier message.
From: Don Perlis <perlis%umcp-cs%csnet-cic.arpa@UDel-Relay>
1. A significant non-mathematical theory presumably is a
*naive* theory for surely any serious (scientific) theory is to
be consistent. So we aren't talking of theories within science
but rather theories held in a naive mode by cognitive agents,
as seen by us scientists. E.g., Hayes' naive physics studies
may well turn out to be (necessarily) inconsistent, for which
reason we need real physics if we want real consistency.
Here I meant that since all scientific theories are intended to be
possible descriptions of reality, and as such consistent, then any
inconsistent ones worth their weight must be in another realm, namely
of hunches, workbenches, not yet freed of their grains of salt. Here
then we are talking of human practice, of cognition and reason, of
commonsense and AI, of the efforts to achieve scientific theories, not
of scientific theories themselves.
∂22-Nov-83 0156 NET-ORIGIN@MIT-MC Policy for Redistribution, Reproduction, and Republication of Messages
Received: from MIT-MC by SU-AI with TCP/SMTP; 22 Nov 83 01:56:40 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 22 Nov 83 04:53-EST
Received: from MIT-APIARY-5 by MIT-OZ via Chaosnet; 22 Nov 83 04:52-EST
Date: Tuesday, 22 November 1983, 04:55-EST
From: phil-sci-request@λSCRC|DMλ
Sender: JCMA%MIT-OZ@MIT-MC.ARPA
Subject: Policy for Redistribution, Reproduction, and Republication of Messages
To: phil-sci@λSCRC|DMλ
The maintainers of this list are not prepared to grant permission to
redistribute, reproduce, or republish any messages sent to PHIL-SCI because
they have no right to do so.
Messages to PHIL-SCI, or any other mailing-list, are implicitly copyrighted by
the authors. Consequently, permission to redistribute messages can only be
granted by the authors. Anyone wishing to reproduce, redistribute, or publish
messages sent to phil-sci should contact the authors for permission.
Note that this does not apply to PHIL-SCI bboards and redistribution lists
because they constitute part of the PHIL-SCI distribution mechanism.
∂22-Nov-83 0742 PATASHNIK@SU-SCORE.ARPA informal departmental lunch
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 Nov 83 07:37:31 PST
Date: Tue 22 Nov 83 07:32:33-PST
From: Student Bureaucrats <PATASHNIK@SU-SCORE.ARPA>
Subject: informal departmental lunch
To: su-bboards@SU-SCORE.ARPA, students@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA,
secretaries@SU-SCORE.ARPA, research-associates@SU-SCORE.ARPA
cc: bureaucrat@SU-SCORE.ARPA
Reply-To: bureaucrat@score
We got a lot of responses, so we've reserved room MJH 146 on Wednesday
Nov. 30 and Thursday Dec. 1 at 12:15pm for these lunches. Sorry to
those who have conflicts on these days, but it was the best we could
do. Hope to see you there.
--Oren and Yoni, bureaucrats
-------
∂22-Nov-83 1009 GOLUB@SU-SCORE.ARPA [Jeffrey D. Ullman <ULLMAN@SU-SCORE.ARPA>: Re: lunch]
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 Nov 83 10:05:41 PST
Date: Tue 22 Nov 83 10:04:43-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: [Jeffrey D. Ullman <ULLMAN@SU-SCORE.ARPA>: Re: lunch]
To: faculty@SU-SCORE.ARPA
As requested by Mike, there will be a technical discussion today. GENE
---------------
Mail-From: ULLMAN created at 22-Nov-83 09:10:11
Date: Tue 22 Nov 83 09:10:11-PST
From: Jeffrey D. Ullman <ULLMAN@SU-SCORE.ARPA>
Subject: Re: lunch
To: GOLUB@SU-SCORE.ARPA
In-Reply-To: Message from "Gene Golub <GOLUB@SU-SCORE.ARPA>" of Mon 21 Nov 83 20:03:56-PST
Yes, I'd like to talk about the "war" between people who think
logic programming is the way to go and those who talk about
rule-based, or "expert" systems. I think they are exactly the
same thing, with different syntactic sugar, and I'd like to
find out what the real differences are. Perhaps Mike could
lead such a discussion, but I'm sure there are many who would
like to join in.
-------
-------
∂22-Nov-83 1013 @MIT-MC:GAVAN%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 22 Nov 83 10:12:59 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 22 Nov 83 13:03-EST
Date: Tue, 22 Nov 1983 12:53 EST
Message-ID: <GAVAN.11969690380.BABYL@MIT-OZ>
From: GAVAN%MIT-OZ@MIT-MC.ARPA
To: Don Perlis <perlis%umcp-cs@CSNET-CIC.ARPA>
Cc: phil-sci%mit-oz@MIT-MC
Subject: limitations of logic
In-reply-to: Msg of 21 Nov 1983 23:24-EST from Don Perlis <perlis%umcp-cs at CSNet-Relay>
I can no longer resist flaming. Pardon me if I repeat anything
already said. I wanted to express my support for RICKL's position
with a brief comment but I'm afraid I might have rambled a bit.
From: Don Perlis <perlis%umcp-cs at CSNet-Relay>
. . . since all scientific theories are intended to be possible
descriptions of reality, and as such consistent, then any inconsistent
ones worth their weight must be in another realm, namely of hunches,
workbenches, not yet freed of their grains of salt.
What makes you so sure reality is consistent? Can you demonstrate its
consistency? In my view, there aren't any EMPIRICAL theories that
have been freed of their grains of salt or ceteris paribus conditions.
Empirical theories are abstractions off experience -- causally related
concepts. And since the particulars that fall under concepts appear
to be organized, as Wittgenstein noticed, like family resemblances
(they're only partially consistent replicas of the concept), some of
them will fail to conform to predictions made on the basis of
theories. So there are always some ceteris paribus conditions (censors,
in Minskyese). All we have are hunches and workbenches. We call them
theories. To demand that any theory, to be scientific, must be free
of all grains of salt is to say there are no scientific theories.
NORMATIVE theories, like formal logic and pure mathematics, are a
different story, of course. They may be consistent but they are
strictly apodictic. They aren't theories about reality at all.
Even a theory like "every event has some cause" (a theory you'd
probably need to believe if you also believed reality is consistent)
has its grain of salt. Such a theory is of course (formally) both
unverifiable and unfalsifiable (see Popper), but there's no adequate
explanation of how the chain of causes was started except perhaps by
some uncaused cause or spontaneously-acting prime mover. But if you
accept this, wouldn't you have to chuck the "every event has some
cause" theory? Whoops, there goes your consistency!
BTW, how does physics explain the origin of the universe? If by the
big-bang theory, where did the big bang come from? And if you know
what caused the big bang, what caused that? And what caused that . . .
Here then we are talking of human practice, of cognition and reason,
of commonsense and AI, of the efforts to achieve scientific theories,
not of scientific theories themselves.
So theories are like unicorns? We know what they are, but there
aren't any?
Yes, we're talking about human practice, and I think in the final
analysis scientific theorizing and common-sense reasoning will turn
out to be quite similar beasts. A "scientific theory" is just an
artefact of human practice. No more, no less. What appears (to us)
to be a "consistent" scientific theory today will undoubtedly require
some grains of salt tomorrow to protect it from those nasty falsifying
experiments.
Yes, I rambled.
∂22-Nov-83 1335 @MIT-MC:BERWICK%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 22 Nov 83 13:35:02 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 22 Nov 83 16:29-EST
Date: 22 Nov 1983 16:28 EST (Tue)
Message-ID: <BERWICK.11969729552.BABYL@MIT-OZ>
From: "Robert C. Berwick" <BERWICK%MIT-OZ@MIT-MC.ARPA>
To: GAVAN%MIT-OZ@MIT-MC.ARPA
Cc: Don Perlis <perlis%umcp-cs@CSNET-CIC.ARPA>, phil-sci%mit-oz@MIT-MC
Subject: limitations of logic
In-reply-to: Msg of 22 Nov 1983 12:53-EST from GAVAN%MIT-OZ at MIT-MC.ARPA
I also can no longer resist replying. I'm with RICKL and GAVAN on
this one. I've had several discussions (months ago) with T. Kuhn on
the role of formalization in scientific theory formation. The
question: are there any examples where the formalization (read:
axiomatization) of a sub-field led to important scientific
discoveries? This doesn't mean post-hoc rational reconstruction, as
RICKL has stressed. Kuhn, an expert in the history of science,
couldn't think of any. There may be isolated examples, but so far,
not a single, documented example has been adduced. PERLIS's case
would be interesting, but is anecdotal. Of course, this doesn't imply
a limitation in principle; perhaps it's a reflection of shoddy human
methodology up to now. The point is that the burden of proof is with
those who would insist that such axiomatization is central to science,
or even to our models of science. So far, the weight of the evidence
is on the other side. In fact, I am willing, even eager, to be
convinced by people like PERLIS; it's just that, as the saying goes,
``wishing won't make it so.'' GAVAN rightly emphasized that PERLIS
was claiming what ought to be, not what is. I would be extremely
interested in even a single example of the kind imputed to exist, and
so would T. Kuhn.
Bob Berwick
∂22-Nov-83 1541 @MIT-MC:MONTALVO%MIT-OZ@MIT-MC reasoning about inconsistency
Received: from MIT-MC by SU-AI with TCP/SMTP; 22 Nov 83 15:40:25 PST
Date: Tue 22 Nov 83 18:37:06-EST
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: reasoning about inconsistency
To: KDF%MIT-OZ@MIT-MC.ARPA, jerryb%MIT-OZ@MIT-MC.ARPA,
phil-sci%MIT-OZ@MIT-MC.ARPA
cc: MONTALVO%MIT-OZ@MIT-MC.ARPA
In-Reply-To: Message from "KDF@MIT-OZ" of Thu 17 Nov 83 19:48:40-EST
Date: Thu, 17 Nov 1983 14:31 EST
From: JERRYB@MIT-OZ
Subject: [KDF at MIT-AI: limitations of logic]
From: KDF at MIT-OZ
In reality, contradictions are quite useful.
The Viewpoint mechanism in Omega solves this problem by placing
theories in viewpoints and allowing one to have a logical theory in
viewpoint A about the structure of the (possibly contradictory)
logical theory in viewpoint B. Thus reasoned analysis of logical
contradictions can be performed.
Date: Thu, 17 Nov 1983 19:40 EST
From: KDF@MIT-OZ
Subject: What to do until clarification comes
I'm sure the viewpoint mechanism in Omega is sufficiently powerful
to allow the kind of meta-reasoning that you allude to, but has anyone
actually done it? If so, how different are the details from the FOL
approach?
Yes, John Lamping has implemented such an example in FOL, the
MasterMind example in IJCAI-83. As far as I've been able to ferret
out, from talking to both Richard Weyhrauch and Carl Hewitt, the only
real difference between the viewpoint mechanism in Omega and the
context mechanism in FOL (which some people may think is a detail) is
that symbol names in Omega are global, whereas in FOL they are
relative to a context. This may have some consequence in an
application where you want to have the same symbol refer to two
different things depending on context.
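For concreteness, here is a minimal sketch of the shared idea, written in
Prolog rather than in Omega or FOL; the predicate names and the toy facts
are invented for illustration and are not taken from either system. The
point is only that assertions can be tagged with a viewpoint, so that
reasoning in one viewpoint can inspect a possibly contradictory theory
held in another:

    % Invented toy example (not Omega or FOL syntax): assertions of
    % viewpoint b, stored as terms that a reasoner in viewpoint a can
    % inspect.
    holds(b, bird(tweety)).
    holds(b, flies(tweety)).
    holds(b, not(flies(tweety))).

    % Meta-level reasoning about a viewpoint: find a proposition that it
    % asserts both positively and negatively.
    contradictory(View, P) :-
        holds(View, P),
        holds(View, not(P)).

The query contradictory(b, P) then succeeds with P = flies(tweety), which
is the kind of reasoned analysis of a contradiction described above.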
There's also the issue of parallelism. Omega is built on Actors which
are inherently parallel, but I don't think this actually affects the
reasoning, or at least the explicit representation of the reasoning.
Richard apparently is working on a parallel extension to FOL but I
haven't seen it come out yet.
Fanya
-------
∂22-Nov-83 1724 LAWS@SRI-AI.ARPA AIList Digest V1 #102
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Nov 83 17:23:17 PST
Date: Tuesday, November 22, 1983 10:31AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #102
To: AIList@SRI-AI
AIList Digest Tuesday, 22 Nov 1983 Volume 1 : Issue 102
Today's Topics:
AI and Society - Expert Systems,
Scientific Method - Psychology,
Architectures - Need for Novelty,
AI - Response to Challenge
----------------------------------------------------------------------
Date: 20 Nov 83 14:50:23-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!simon @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: psuvax.357
It seems a little dangerous "to send machines where doctors won't go" -
you'll get the machines treating the poor, and human experts for the privileged
few.
Also, expert systems for economics and social science to help us would be fine
if there were a convincing argument that (a) these social sciences are truly
helpful for coping with unpredictable technological change, and (b) there
is a sufficiently accepted basis of quantifiable knowledge to put into the
proposed systems.
janos simon
------------------------------
Date: Mon, 21 Nov 1983 15:24 EST
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: I recall Rational Psychology
Date: 17 Nov 83 13:50:54-PST (Thu)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: I recall Rational Psychology
... Proper scientific method is very
hard to apply in the face of stunning lack of understanding or hard,
testable theories. Most proper experiments are morally unacceptable in
the psychological arena. As it is, there are so many controls not done,
so many sources of artifact, so much use of statistics to try to ferret
out hoped-for correlations, so much unavoidable anthropomorphism. As with
scholars such as H. Dumpty, you can define "science" to mean what you like,
but I think most psychological work fails the test.
----GaryFostel----
You don't seem to be aware of Experimental Psychology, which involves
subjects' consent, proper controls, hypothesis formation and
evaluation, and statistical validation. Most of it involves sensory
processes and learning. The studies are very rigorous and must be so
to end up in the literature. You may be thinking of Clinical Psychology.
If so, please don't lump all of Psychology into the same group.
Fanya Montalvo
------------------------------
Date: 19 Nov 83 11:15:50-PST (Sat)
From: decvax!tektronix!ucbcad!notes @ Ucb-Vax
Subject: Re: parallelism vs. novel architecture - (nf)
Article-I.D.: ucbcad.835
Re: parallelism and fundamental discoveries
The stored-program concept (Von Neumann machine) was indeed a breakthrough
both in the sense of Turing (what is theoretically computable) and in the
sense of von Neumann (what is a practical machine). It is noteworthy,
however, that I am typing this message using a text editor with a segment
of memory devoted to program, another segment devoted to data, and with an
understanding on the part of the operating system that if the editor were
to try to alter one of its own instructions, the operating system should
treat this as pathological, and abort it.
In other words, the vaunted power of being able to write data that can be
executed as a program is treated in the most stilted and circumspect manner
in the interests of practicality. It has been found to be impractical to
write programs that modify their own inner workings. Yet people do this to
their own consciousness all the time--in a largely unconscious way.
Turing-computability is perhaps a necessary condition for intelligence.
(That's been beaten to death here.) What is needed is a sufficient condition.
Can that possibly be a single breakthrough or innovation? There is no
question that, working from the agenda for AI that was so hubristically
laid out in the 50's and 60's, such a breakthrough is long overdue. Who
sees any intimation of it now?
Perhaps what is needed is a different kind of AI researcher. New ground
is hard to break, and harder still when the usual academic tendency is to
till old soil until it is exhausted. I find it interesting that many of
the new ideas in AI are coming from outside the U.S. AI establishment
(MIT, CMU, Stanford, mainly). Logic programming seems largely to be a
product of the English-speaking world *apart* from the U.S. Douglas
Hofstadter's ideas (though probably too optimistic) are at least a sign
that, after all these years, some people find the problem too important
to be left to the experts. Tally Ho! Maybe AI needs a nut with the
undaunted style of a Nikola Tesla.
Some important AI people say that Hofstadter's schemes can't work. This
makes me think of the story about the young 19th century physicist, whose
paper was reviewed and rejected as meaningless by 50 prominent physicists
of the time. The 51st was Maxwell, who had it published immediately.
Michael Turner (ucbvax!ucbesvax.turner)
------------------------------
Date: 20 November 1983 2359-PST (Sunday)
From: helly at AEROSPACE (John Helly)
Subject: Challenge
I am responding to Ralph Johnson's recent submittal concerning the
content and contribution of work in the field of AI. The following
comments should be evaluated in light of the fact that I am currently
developing an 'expert system' as a dissertation topic at UCLA.
My immediate reaction to Johnson's queries/criticisms of AI is that of
hearty agreement. Having read a great deal of AI literature, my
personal bias is that there is a great deal of rediscovery of Knuth in
the context of new applications. The only thing apparently unique is
that each new 'discovery' carries with it novel jargon, with very
little attempt to connect to and build on previous work in the field. This
reflects a broader concern I have with Computer Science in general in
that, having been previously trained as a biologist, I find very little
that I consider scientific in this field. This does not diminish my
hope for, and consequently my commitment to, work in this area.
Like many things, this commitment is based on my intuition (read faith)
that there really is something of value in this field. The only
rationale I can offer for such a commitment is the presumption that the
lack of progress in AI research is the result of the lack of scientific
discipline of AI researchers and computer scientists in general. The AI
community looks much more like a heterogeneous population of hackers than
like a disciplined, scientific community. Maybe this is symptomatic
of a new field of science going through growing pains but I do not
personally believe this is the case. I am unaware of any similar
developmental process in the history of science.
This all sounds pretty negative, I know. I believe that criticism
should always be stated with some possible corrective action, though,
and maybe I have some. Computer science curricula should require formal
scientific training. Exposure to truly empirical sciences would serve
to familiarize students with the value of systematic research,
experimental design, hypothesis testing and the like. We should find
ways to apply the scientific method to our research rather than
collecting a lot of anecdotal information about our 'programming
environment' and 'heuristics' and publishing it at first light.
Maybe computer science is basically an engineering discipline (i.e.,
application-oriented) rather than a science. I believe, however, that
at the least computer science, even if misnamed, offers powerful tools
for investigating human information processing (i.e., intelligence) if
approached scientifically. Properly applied, these tools can provide the
same benefits they have offered physicists, biologists and medical
researchers - insight into mechanisms and techniques for simulating the
systems of interest.
Much of AI is very slick programming. I'm just not certain that it is
anything more than that, at least at present.
------------------------------
Date: Mon 21 Nov 83 14:12:35-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Reply to Ralph Johnson
Your recent msg to AILIST was certainly provocative, and I thought I'd
try to reply to a couple of the points you made. First, I'm a little
appalled at what you portray as the "Cornell" attitude towards AI. I
hope things will improve there in the future. Maybe I can contribute
a little by trying to persuade you that AI has substance.
I'd like to begin by calling attention to the criteria that you are
using to evaluate AI. I believe that if you applied these same
criteria to other areas of computer science, you would find them
lacking also. For example, you say that "While research in AI has
produced many results in side areas..., none of the past promises of
AI have been fulfilled." If we look at other fields of computer
science, we find similar difficulties. Computer science has promised
secure, reliable, user-friendly computing facilities, cheap and robust
distributed systems, integrated software tools. But what do we have?
Well, we have some terrific prototypes in research labs, but the rest
of the world is still struggling with miserable computing
environments, systems that constantly crash, and distributed systems
that end up being extremely expensive and unreliable.
The problem with this perspective is that it is not fair to judge a
research discipline by the success of its applications. In AI
research labs, AI has delivered on many of its early promises. We now
have machines with limited visual and manipulative capabilities. And
we do have systems that perform automatic language translation (e.g.,
at Texas).
Another difficulty of judging AI is that it is a "residual"
discipline. As Avron Barr wrote in the introduction to the AI
Handbook, "The realization that the detailed steps of almost all
intelligent human activity were unknown marked the beginning of
Artificial Intelligence as a separate part of computer science." AI
tackles the hardest application problems around: those problems whose
solution is not understood. The rest of computer science is primarily
concerned with finding optimum points along various solution
dimensions such as speed, memory requirements, user interface
facilities, etc. We already knew HOW to sort numbers before we had
computers. The role of Computer Science was to determine how to sort
them quickly and efficiently using a computer. But, we didn't know
HOW to understand language (at least not at a detailed level). AI's
task has been to find solutions to these kinds of problems.
Since AI has tackled the most difficult problems, it is not surprising
that it has had only moderate success so far. The bright side of
this, however, is that long after we have figured out whether P=NP, AI
will still be uncovering fascinating and difficult problems. That's
why I study it.
You are correct in saying that the AI literature is hard to read. I
think there are several reasons for this. First, there is a very
large amount of terminology to master in AI. Second, there are great
differences in methodology. There is no general agreement within the
AI community about what the hard problems are and how they should be
addressed (although I think this is changing). Good luck with any
further reading that you attempt.
Now let me address some of your specific observations about AI. You
say "I already knew a large percentage of the algorithms. I did not
even think of most of them as being AI algorithms." I would certainly
agree. I cite this as evidence that there is a unity to all parts of
computer science, including AI. You also say "An expert system seems
to be a program that uses a lot of problem-related hacks to usually
come up with the right answer." I think you have hit upon the key
lesson that AI learned in the seventies: The solution to many of the
problems we attack in AI lies NOT in the algorithms but in the
knowledge. That lesson reflects itself, not so much in differences in
code, but in differences in methodology. Expert systems are different
and important because they are built using a programming style that
emphasizes flexibility, transparency, and rapid prototyping over
efficiency. You say "There seems to be nothing more to expert systems
than to other complicated programs". I disagree completely. Expert
systems can be built, debugged, and maintained more cheaply than other
complicated programs. And hence, they can be targeted at applications
for which previous technology was barely adequate. Expert systems
(knowledge programming) techniques continue the revolution in
programming that was started with higher-level languages and furthered
by structured programming and object-oriented programming.
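A toy sketch of that style, in Prolog (an invented illustration, not drawn
from any particular expert system): the domain knowledge lives in rule/3
facts that a small generic program can both use and display, which is what
makes the flexibility and transparency cheap.

    % Invented example: domain knowledge as data,
    % rule(Name, Conclusion, Conditions).
    rule(r1, diagnosis(bacterial_infection),
         [fever, elevated_white_count]).
    rule(r2, treatment(drug_x),
         [diagnosis(bacterial_infection), not_allergic(drug_x)]).

    % One generic procedure can render ANY rule as an explanation,
    % because the knowledge is not buried in procedural code.
    explain(Conclusion) :-
        rule(Name, Conclusion, Conditions),
        format("~w follows by rule ~w if ~w all hold.~n",
               [Conclusion, Name, Conditions]).

Adding or changing knowledge means adding or editing rule/3 facts; the
interpreter and the explanation facility need not change, which is where
the rapid prototyping comes from.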
Your view of "knowledge representations" as being identical with data
structures reveals a fundamental misunderstanding of the knowledge vs.
algorithms point. Most AI programs employ very simple data structures
(e.g., record structures, graphs, trees). Why, I'll bet there's not a
single AI program that uses leftist-trees or binomial queues! But, it
is the WAY that these data structures are employed that counts. For
example, in many AI systems, we use record structures that we call
"schemas" or "frames" to represent domain concepts. This is
uninteresting. But what is interesting is that we have learned that
certain distinctions are critical, such as the distinction between a
subset of a set and an element of a set. Or the distinction between a
causal agent of a disease (e.g., a bacterium) and a feature that is
helpful in guiding diagnosis (e.g., whether or not the patient has
been hospitalized). Much of AI is engaged in finding and cataloging
these distinctions and demonstrating their value in simplifying the
construction of expert systems.
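To make the flavor of those distinctions concrete, here is a small sketch
in Prolog, using invented toy relations rather than the actual
representation of MYCIN or any other system:

    % Invented illustration: element-of versus subset-of are different
    % relations licensing different inferences.
    subset_of(streptococci, bacteria).
    element_of(organism1, streptococci).

    % An element inherits membership through subsets; a subset is not
    % itself an element of the larger class.
    member_of(X, Class) :- element_of(X, Class).
    member_of(X, Class) :- element_of(X, Sub), subclass_of(Sub, Class).
    subclass_of(S, C) :- subset_of(S, C).
    subclass_of(S, C) :- subset_of(S, Mid), subclass_of(Mid, C).

    % Causal agent of a disease versus a merely suggestive feature.
    causal_agent(meningitis, bacteria).
    suggestive_feature(meningitis, recently_hospitalized).

Here member_of(organism1, bacteria) succeeds while member_of(streptococci,
bacteria) correctly fails, and a program can treat causal_agent/2 and
suggestive_feature/2 differently, using only the former for causal
reasoning.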
In your message, you gave five possible answers that you expected to
receive. I guess mine doesn't fit any of your categories. I think
you have been quite perceptive in your analysis of AI. But you are
still looking at AI from the "algorithm" point of view. If you shift
to the "knowledge" perspective, your criteria for evaluating AI will
shift as well, and I think you will find the field to be much more
interesting.
--Tom Dietterich
------------------------------
Date: 22 Nov 83 11:45:30 EST (Tue)
From: rej@Cornell (Ralph Johnson)
Subject: Clarifying my "AI Challange"
I am sorry to have created the mistaken impression that I don't think AI should
be done or is worth the money we spend on it. The side effects alone are
worth much more than has been spent. I do understand the effects of AI on
other areas of CS. Even though going to the moon brought no direct benefit
to the US outside of prestige (which, by the way, was enormous), we learned
a lot that was very worthwhile. Planetary scientists point out that we
would have learned a lot more if we had spent the money directly on planetary
exploration, but the moon race captured the hearts of the public and allowed
the money to be spent on space instead of bombs. In a similar way, AI
provides a common area for some of our brightest people to tackle very hard
problems, and consequently learn a great deal. My question, though, is
whether AI is really going to change the world any more than the rest of
computer science is already doing. Are the great promises of AI going to
be fulfilled?
I am thankful for the comments on expert systems. Following these lines of
reasoning, expert systems are differentiated from other programs more by the
programming methodology used than by algorithms or data structures. It is
very helpful to have these distinctions pointed out; they have made several
ideas clearer to me.
The ideas in AI are not really any more difficult than those in other areas
of CS; they are just more poorly explained. Several times I have run into
someone who can explain well the work that he/she has been doing, and each
time I understand what they are doing. Consequently, I believe that the
reason I see few descriptions of how systems work is that the
designers are not sure how they work, or they do not know what is important
in explaining how they work, or they do not know that it is important to
explain how they work. Are they, in fact, describing how they work, and I
just don't notice? What I would like is more examples of systems that work,
descriptions of how they work, and of how well they work.
Ralph Johnson (rej@cornell, cornell!rej)
------------------------------
Date: Tue 22 Nov 83 09:25:52-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Clarifying my "AI Challange"
Ralph,
I can think of a couple of reasons why articles describing Expert
Systems are difficult to follow. First, these programs are often
immense. It would take a book to describe all of the system and how
it works. Hence, AI authors try to pick out a few key things that
they think were essential in getting the system to work. It is kind
of like reading descriptions of operating systems. Second, the lesson
that knowledge is more important than algorithms has still not been
totally accepted within AI. Many people tend to describe their
systems by describing the architecture (i.e., the algorithms and data
structures) instead of the knowledge. The result is that the reader
is left saying "Yes, of course I understand how backward chaining (or
an agenda system) works, but I still don't understand how it diagnoses
soybean diseases..." The HEARSAY people are particularly guilty of
this. Also, Lenat's dissertation includes much more discussion of
architecture than of knowledge. It often takes many years before
someone publishes a good analysis of the structure of the knowledge
underlying the expert performance of the system. A good example is
Bill Clancey's work analyzing the MYCIN system. See his most recent
AI Journal paper.
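As an aside, the backward-chaining "architecture" that such a reader already
understands really is only a few lines; here is a minimal sketch in Prolog,
with an invented soybean rule standing in for the knowledge that a paper
actually needs to explain:

    % The entire generic engine: backward chaining over rule/2 and fact/1.
    prove(Goal) :- fact(Goal).
    prove(Goal) :- rule(Goal, Conditions), prove_all(Conditions).
    prove_all([]).
    prove_all([C|Cs]) :- prove(C), prove_all(Cs).

    % Invented placeholder knowledge; real systems have hundreds of rules.
    rule(diagnosis(brown_spot), [leaf_spots, warm_humid_weather]).
    fact(leaf_spots).
    fact(warm_humid_weather).

The query prove(diagnosis(brown_spot)) succeeds, but nothing in the four
engine clauses explains why a real system diagnoses what it does; that is
carried entirely by the rules, which is why describing the knowledge matters
more than describing the architecture.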
--Tom
------------------------------
End of AIList Digest
********************
∂22-Nov-83 2118 @MIT-MC:KDF%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 22 Nov 83 21:18:45 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 23 Nov 83 00:16-EST
Date: Wed, 23 Nov 1983 00:15 EST
Message-ID: <KDF.11969814634.BABYL@MIT-OZ>
From: KDF%MIT-OZ@MIT-MC.ARPA
To: "Robert C. Berwick" <BERWICK%MIT-OZ@MIT-MC.ARPA>
Cc: GAVAN%MIT-OZ@MIT-MC.ARPA, Don Perlis <perlis%umcp-cs@CSNET-CIC.ARPA>,
phil-sci%mit-oz@MIT-MC
Subject: limitations of logic
In-reply-to: Msg of 22 Nov 1983 16:28 EST (Tue) from "Robert C. Berwick" <BERWICK%MIT-OZ@MIT-MC.ARPA>
If the question has become "does the explicit use of logic
(pick your favorite) lead to making new discoveries, or has it ever
done so", I would argue, as I have in the past, NO. But the original
question was different - "does logic have any place in representing
knowledge, especially when viewed with an eye towards computation?" -
with Carl taking the position of NO. It should be clear that a
negative answer to the first question says very little, if anything at
all, about the second.
∂23-Nov-83 0229 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #54
Received: from SU-SCORE by SU-AI with TCP/SMTP; 23 Nov 83 02:28:57 PST
Date: Tuesday, November 22, 1983 9:04PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #54
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Wednesday, 23 Nov 1983 Volume 1 : Issue 54
Today's Topics:
Announcement - '84 International Symposium on Logic Programming
----------------------------------------------------------------------
Date: 22 Nov 1983 13:06:13-EST (Tuesday)
From: Doug DeGroot <Degroot.YKTVMV.IBM@Rand-Relay>
Subject: Short Form
1984 International Symposium on Logic Programming
February 6-9, 1984
Atlantic City, New Jersey
BALLY'S PARK PLACE CASINO
Sponsored by the IEEE Computer Society
Material Enclosed:
Conference Calendar
Conference Registration Form
Hotel Registration Form
Tutorial Description
Conference Program
Travel Notes
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Registration Form
Send your check and completed registration form to:
Registration - 1984 ISLP
Doug DeGroot, Program Chairman
IBM Thomas J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY 10598
Name: ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Company/School: ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Address:←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Telephone: ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
STATUS Conference Tutorial
Member, IEEE ←← $155 ←← $110
Non-member ←← $180 ←← $125
IEEE COMPUTER SOCIETY MEMBERSHIP NO.: ←←←←←←←←
Late registration - add $15 (if after 1/30/84)
Make check payable to:
1984 Int'l Symposium on Logic Programming
Warning:
Don't forget to send in your Hotel Registration Form as
well.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Hotel Registration Information
The Symposium will take place in Bally's Park Place Casino.
A large number of rooms have been reserved there. However,
if they run out, additional rooms have been reserved at the
Claridge and Sands hotels. Both are close to Bally's. I
suggest you ask for a room at Bally's and indicate a second
choice on your registration form.
An official room registration form follows. Please fill it
in and send it with the required deposit to Bally's
immediately.
Rooms are limited, so return your registration soon.
Bally's Park Place Casino Hotel
Park Place and the Boardwalk
Atlantic City, New Jersey 08401
phone: (800) 772-7777
Make check payable to:
Bally's Park Place Casino Hotel
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Hotel Registration Form
1984 Int'l Symposium on Logic Programming
Bally's is the site of the conference itself. The Claridge
and Sands also have a number of rooms reserved for us. All
three offer the basic rate of $52.00 for a single or double.
If Bally's becomes full, your registration form will be
forwarded to your hotel of second choice. You may stay in
one of these other hotels by simply not checking any of the
Bally's slots.
Name: ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Address:←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Date of arrival: ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Date of departure: ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Telephone: ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
CODE: LOGIC,ACLP
Primary Choice
←← Bally's single (one person, one bed) $52.00
←← Bally's double (two persons, one bed) $52.00
←← Bally's double/double (two persons, two beds) $52.00
←← Bally's one-bedroom suite $104.00
←← Bally's two-bedroom suite $156.00
←← Bally's triple $67.00
←← Bally's quadruple $82.00
Secondary Choice - Claridge or Sands
←← Claridge single (one person, one bed) $52.00
←← Claridge double (two persons, one bed) $52.00
←← Claridge double/double (two persons, two beds) $52.00
←← Claridge one-bedroom suite $104.00
←← Claridge two-bedroom suite $156.00
←← Claridge triple $62.00
←← Claridge quadruple $72.00
←← Sands single (one person, one bed) $52.00
←← Sands double (two persons, one bed) $52.00
←← Sands double/double (two persons, two beds) $52.00
←← Sands triple $62.00
←← Sands quadruple $72.00
Reservations must be received by Jan 17, 1984.
Send in both pages of the hotel registration form.
Cancellations must be made 48 hours in advance to receive
deposit.
Send completed hotel registration forms to:
Bally's Park Place Casino Hotel
Park Place and the Boardwalk
Atlantic City, New Jersey 08401
(800) 772-7777
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Travel Notes
Atlantic City, New Jersey is about a 3-4 hour drive from
either the LaGuardia or JFK airports (but closer to JFK).
It is a 1-1/2 hour drive from the Newark, New Jersey
airport. It is about a 1 hour drive from the Philadelphia
airport. You may want to rent a car. If so, check with Hertz
for discount rates for the conference.
If you want to fly straight into the Atlantic City airport,
ask the person booking your flight to make Atlantic City
your final destination. Tell them to use the code AL-AIY as
the code for your final destination. This will book your
final leg of the journey on an Alleghany Airline shuttle.
If they cannot, ask for the code AIY. In this way, your
overall flight costs should be greatly reduced. (The
Alleghany shuttle is available from Washington, Newark, JFK,
LaGuardia, and Philadelphia.)
From the Atlantic City airport, you can take a 5-minute taxi
to the hotel.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Conference Overview
Opening Address:
Prof. J.A. (Alan) Robinson
Syracuse University
Guest Speaker:
Prof. Alain Colmerauer
University of Aix-Marseille II
Marseille, France
Keynote Speaker:
Dr. Ralph E. Gomory,
IBM Vice President & Director of Research,
IBM Thomas J. Watson Research Center
Tutorial: An Introduction to Prolog
Ken Bowen, Syracuse University
Entertainment
- Three Cocktail Parties
- Banquet
- Casino Entertainment Show
Presentations
35 Papers, 11 Sessions (11 Countries, 4 Continents)
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Conference Calendar
February 6-9, 1984
Monday, February 6
- Late Registration for Conference - all day
- Tutorial - An Introduction to Logic Programming
9:00 a.m. - 4:30 p.m. (registration required)
- Cocktail Party - 7:00 - 8:00 p.m.
Tuesday, February 7
- Conference Sessions - 9:00 a.m. - 6:00 p.m.
- Cocktail Party - 7:00 p.m. - 8:00 p.m.
- Banquet - 8:00 p.m. - 10:00 p.m.
Wednesday, February 8
- Conference Sessions - 9:00 a.m. - 6:00 p.m.
- Cocktail Party - 7:00 p.m. - 8:00 p.m.
- Casino Entertainment Show - 10:00 p.m. - 12:00 p.m.
Thursday, February 9
- Conference Sessions - 9:00 a.m. - 6:00 p.m.
- Cash in your chips - 7:00 p.m. - ?
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Conference Program
(Preliminary)
Session 1: Architectures I
←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Parallel Prolog Using Stack Segments on Shared-memory
Multiprocessors
Peter Borgwardt (Univ. Minn)
2. Executing Distributed Prolog Programs on a Broadcast
Network
David Scott Warren (SUNY Stony Brook, NY)
3. AND Parallel Prolog in Divided Assertion Set
Hiroshi Nakagawa (Yokohama Nat'l Univ, Japan)
4. Towards a Pipelined Prolog Processor
Evan Tick (Stanford Univ,CA) and David Warren
Session 2: Architectures II
←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Implementing Parallel Prolog on a Multiprocessor Machine
Naoyuki Tamura and Yukio Kaneda (Kobe Univ, Japan)
2. Control of Activities in the OR-Parallel Token Machine
Andrzej Ciepielewski and Seif Haridi (Royal Inst. of
Tech, Sweden)
3. Logic Programming Using Parallel Associative Operations
Steve Taylor, Andy Lowry, Gerald Maguire, Jr., and Sal
Stolfo (Columbia Univ,NY)
Session 3: Parallel Language Issues
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Negation as Failure and Parallelism
Tom Khabaza (Univ. of Sussex, England)
2. A Note on Systems Programming in Concurrent Prolog
David Gelernter (Yale Univ, CT)
3. Fair, Biased, and Self-Balancing Merge Operators in
Concurrent Prolog
Ehud Shapiro (Weizmann Inst. of Tech, Israel)
Session 4: Applications in Prolog
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Editing First-Order Proofs: Programmed Rules vs. Derived
Rules
Maria Aponte, Jose Fernandez, and Phillipe Roussel (Simon
Bolivar Univ, Venezuela)
2. Implementing Parallel Algorithms in Concurrent Prolog:
The MAXFLOW Experience
Lisa Hellerstein (MIT,MA) and Ehud Shapiro (Weizmann
Inst. of Tech, Israel)
Session 5: Knowledge Representation and Data Bases
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. A Knowledge Assimilation Method for Logic Databases
T. Miyachi, S. Kunifuji, H. Kitakami, K. Furukawa, A.
Takeuchi, and H. Yokota (ICOT, Japan)
2. Knowledge Representation in Prolog/KR
Hideyuki Nakashima (Electrotechnical Laboratory, Japan)
3. A Methodology for Implementation of a Knowledge
Acquisition System
H. Kitakami, S. Kunifuji, T. Miyachi, and K. Furukawa
(ICOT, Japan)
Session 6: Logic Programming plus Functional Programming - I
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. FUNLOG = Functions + Logic: A Computational Model
Integrating Functional and Logical Programming
P.A. Subrahmanyam and J.-H. You (Univ of Utah)
2. On Implementing Prolog in Functional Programming
Mats Carlsson (Uppsala Univ, Sweden)
3. On the Integration of Logic Programming and Functional
Programming
R. Barbuti, M. Bellia, G. Levi, and M. Martelli (Univ. of
Pisa and CNUCE-CNR, Italy)
Session 7: Logic Programming plus Functional Programming- II
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Stream-Based Execution of Logic Programs
Gary Lindstrom and Prakash Panangaden (Univ of Utah)
2. Logic Programming on an FFP Machine
Bruce Smith (Univ. of North Carolina at Chapel Hill)
3. Transformation of Logic Programs into Functional Programs
Uday S. Reddy (Univ of Utah)
Session 8: Logic Programming Implementation Issues
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Efficient Prolog Memory Management for Flexible Control
Strategies
David Scott Warren (SUNY at Stony Brook, NY)
2. Indexing Prolog Clauses via Superimposed Code Words and
Field Encoded Words
Michael J. Wise and David M.W. Powers, (Univ of New South
Wales, Australia)
3. A Prolog Technology Theorem Prover
Mark E. Stickel, (SRI, CA)
Session 9: Grammars and Parsing
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. A Bottom-up Parser Based on Predicate Logic: A Survey of
the Formalism and Its Implementation Technique
K. Uehara, R. Ochitani, O. Kakusho, and J. Toyoda (Osaka
Univ, Japan)
2. Natural Language Semantics: A Logic Programming Approach
Antonio Porto and Miguel Filgueiras (Univ Nova de Lisboa,
Portugal)
3. Definite Clause Translation Grammars
Harvey Abramson, (Univ. of British Columbia, Canada)
Session 10: Aspects of Logic Programming Languages
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. A Primitive for the Control of Logic Programs
Kenneth M. Kahn (Uppsala Univ, Sweden)
2. LUCID-style Programming in Logic
Derek Brough (Imperial College, England) and Maarten H.
van Emden (Univ. of Waterloo, Canada)
3. Semantics of a Logic Programming Language with a
Reducibility Predicate
Hisao Tamaki (Ibaraki Univ, Japan)
4. Object-Oriented Programming in Prolog
Carlo Zaniolo (Bell Labs, New Jersey)
Session 11: Theory of Logic Programming
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. The Occur-check Problem in Prolog
David Plaisted (Univ of Illinois)
2. Stepwise Development of Operational and Denotational
Semantics for Prolog
Neil D. Jones (Datalogisk Inst, Denmark) and Alan Mycroft
(Edinburgh Univ, Scotland)
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
An Introduction to Prolog
A Tutorial by Dr. Ken Bowen
Outline of the Tutorial
- AN OVERVIEW OF PROLOG
- Facts, Databases, Queries, and Rules in Prolog
- Variables, Matching, and Unification
- Search Spaces and Program Execution
- Non-determinism and Control of Program Execution
- Natural Language Processing with Prolog
- Compiler Writing with Prolog
- An Overview of Available Prologs
Who Should Take the Tutorial
The tutorial is intended for both managers and programmers
interested in understanding the basics of logic programming
and especially the language Prolog. The course will focus on
direct applications of Prolog, such as natural language
processing and compiler writing, in order to show the power
of logic programming. Several different commercially
available Prologs will be discussed and compared.
About the Instructor
Dr. Ken Bowen is a member of the Logic Programming Research
Group at Syracuse University in New York, where he is also a
Professor in the School of Computer and Information
Sciences. He has authored many papers in the field of logic
and logic programming. He is considered to be an expert on
the Prolog programming language.
------------------------------
End of PROLOG Digest
********************
∂23-Nov-83 0553 @MIT-MC:HEWITT@MIT-XX limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 23 Nov 83 05:52:59 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 23 Nov 83 08:49-EST
Date: Wed, 23 Nov 1983 08:43 EST
Message-ID: <HEWITT.11969907065.BABYL@MIT-XX>
From: HEWITT@MIT-XX.ARPA
To: KDF%MIT-OZ@MIT-MC.ARPA
Cc: "Robert C. Berwick" <BERWICK%MIT-OZ@MIT-MC.ARPA>,
GAVAN%MIT-OZ@MIT-MC.ARPA, Hewitt@MIT-XX.ARPA,
Don Perlis <perlis%umcp-cs@CSNET-CIC.ARPA>, phil-sci%mit-oz@MIT-MC.ARPA
Reply-to: Hewitt at MIT-XX
Subject: limitations of logic
In-reply-to: Msg of 23 Nov 1983 00:15-EST from KDF%MIT-OZ at MIT-MC.ARPA
Date: Wednesday, 23 November 1983 00:15-EST
From: KDF%MIT-OZ at MIT-MC.ARPA
To: Robert C. Berwick <BERWICK%MIT-OZ at MIT-MC.ARPA>
cc: GAVAN%MIT-OZ at MIT-MC.ARPA,
Don Perlis <perlis%umcp-cs at CSNET-CIC.ARPA>,
phil-sci%mit-oz at MIT-MC
Re: limitations of logic
If the question has become "does the explicit use of logic
(pick your favorite) lead to making new discoveries, or has it ever
done so", I would argue, as I have in the past, NO. But the original
question was different - "does logic have any place in representing
knowledge, especially when viewed with an eye towards computation?" -
with Carl taking the position of NO. It should be clear that a
negative answer to the first question says very little, if anything at
all, about the second.
Actually, the main point that I made was "There are fundamental
limitations to the use of logic as a programming language."
Mathematical logic has an important place in our Artificial
Intelligence armamentarium. But it's not the whole show.
Cheers,
Carl
∂23-Nov-83 0604 @MIT-MC:HEWITT@MIT-XX limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 23 Nov 83 06:04:15 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 23 Nov 83 09:01-EST
Date: Wed, 23 Nov 1983 08:50 EST
Message-ID: <HEWITT.11969908428.BABYL@MIT-XX>
From: HEWITT@MIT-XX.ARPA
To: "Robert C. Berwick" <BERWICK%MIT-OZ@MIT-MC.ARPA>
Cc: GAVAN%MIT-OZ@MIT-MC.ARPA, Hewitt@MIT-XX.ARPA,
Don Perlis <perlis%umcp-cs@CSNET-CIC.ARPA>, phil-sci%mit-oz@MIT-MC.ARPA
Reply-to: Hewitt at MIT-XX
Subject: limitations of logic
In-reply-to: Msg of 22 Nov 1983 16:28-EST from Robert C. Berwick <BERWICK%MIT-OZ at MIT-MC.ARPA>
I believe that formalization played a large role in the formation
of our theories of computation. In particular formalization played
key roles in the development of Church's thesis and the transition
from primitive recursive to partial recursive functions.
Cheers,
Carl
∂23-Nov-83 0958 KJB@SRI-AI.ARPA Advisory Panel's Visit
Received: from SRI-AI by SU-AI with TCP/SMTP; 23 Nov 83 09:58:25 PST
Date: Wed 23 Nov 83 09:10:58-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Advisory Panel's Visit
To: csli-folks@SRI-AI.ARPA
Dear All,
The meeting with the Advisory Panel went very well. They will
each write a report and send it to me next week, after which I will be
able to summarize my impressions more fully. For now, let me give you
my first impressions.
The Panel was very sympathetic to what we are trying to do.
They came with various ideas about what that was, and some serious
apprehensions, but went away excited by the venture, as far as I could
tell. They gave us a lot of good ideas, and made us see some of our
problems (especially the problem of balance) much more accurately.
Some of you have already had messages from me reflecting advice
from the panel. Others will soon. However, there is one piece of
advice that we all need to take to heart.
Two members of the panel said that people at CSLI, and others
at Stanford, feel out of touch with a lot that is going on. There
seem to be three aspects to the problem.
1) There seems to be a feeling that a lot of decisions are
getting made behind closed doors. I don't think this is the case. I
don't want to call any more meetings than necessary, but I do want to
keep in touch. Let me know if you have specific worries. But notice
that I have delegated much responsibility to committees, and it is up
to the chairpeople of those committees to keep all of us posted on
their activities, and for us to bring our concerns to the appropriate
committee. Of course this will not work unless the committees work.
2) At the research level, this is a very big group, with more
going on than any one person can hope to follow. We all have to be
selective about what we attend, and what we don't.
3) However, to do this we need to be well informed. One
problem is that we often find out about a meeting only after the fact.
This causes resentment and a feeling of exclusion. This extends
outside our own group, to a feeling of exclusion on the part of
others, especially (I have heard) on the part of people in the phil,
linguistics, and psych departments who are not part of CSLI.
So, let me reiterate the plea I made earlier that ALL TALKS
(i.e., all meetings with a prepared presentation) be announced over the
net and, if at all possible, in time to appear in the newsletter!
Similarly, all visits sponsored by CSLI should be announced. (I found
it very embarrassing not to know that Ken Church was here.)
Another problem is that people outside the area feel that
certain lines of research are excluded here (GB, e.g.). This makes the
question of visitors, colloquia speakers, postdocs, the Bell
connection, etc. very important. Please give Joan's committee (which
is responsible for this) any suggestions you can. It is something SDF
is VERY sensitive about. There is a separate line item in the budget
just for this sort of outreach activity.
There are other things that grew out of the meeting that need
to be discussed soon. One thing that clearly emerged was that the
financial problem is not really an A versus B problem, but rather that
A+B is going to have financial problems unless we bring in extra
money to support it. We will be having a meeting about this next
week.
Rod Burstall, the one member of the panel who was not able to
attend, is coming next week. I will compile a final report after his
visit, and after I get the letters from the rest of the Panel.
Jon
-------
∂23-Nov-83 0959 KJB@SRI-AI.ARPA Fujimura's visit
Received: from SRI-AI by SU-AI with TCP/SMTP; 23 Nov 83 09:58:55 PST
Date: Wed 23 Nov 83 09:27:39-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Fujimura's visit
To: csli-folks@SRI-AI.ARPA
Fujimura just called and will be here in about an hour! He will be at
Ventura this a.m. and will have lunch with Stanley at the Faculty club.
If you want to talk with him, give me a call or come by.
-------
∂23-Nov-83 1005 @MIT-MC:mclean@NRL-CSS perlis on tarski and meaning
Received: from MIT-MC by SU-AI with TCP/SMTP; 23 Nov 83 10:05:47 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 23 Nov 83 11:39-EST
From: John McLean <mclean@NRL-CSS>
Date: Wed, 23 Nov 83 11:33:54 EST
To: phil-sci at mit-mc
Subject: perlis on tarski and meaning
Cc: mclean at NRL-CSS
From Don Perlis (21 Nov)
A final word on meaning: Tarski can be viewed most
fruitfully as providing not a definition of *meaning* but of
the *different* *possible* meanings of a statement in different
contexts. You pick the context you want and then the meaning
comes with it. The meaning is then part of your naive theory,
not external to it. From the outside, of course one can only
see all the different possibilities, and indeed there is in
general no one *right* meaning out there. It is in *my* head
that 'this apple' means the one I have in mind; on the outside
people can speculate on just what I might have meant. Their
ability to get it 'right' (tho this is not a well-defined
notion, as Stich and others would argue) suggests that we hold
similar naive theories in our heads, but doesn't as far as I
can see show that there is a 'real' meaning and theory that we
somehow must divine in our cog sci efforts.
Although I agree with the last statement of this passage, I disagree with
just about everything else. First of all, Tarski was no more concerned with
meaning than, say, Quine. Tarski was concerned with "truth" and those
semantic notions necessary for its definition, viz. "denotation" and
"satisfaction". One can approach meaning within this framework by saying
that the meaning of a term picks out its reference if it has one in a given
context and the meaning of a sentence is its truth conditions, but one can
hold on to Tarski's definition of "truth" and reject meanings completely. I
think Tarski probably did and certainly Quine does. The importance of the
distinction between "meaning" and "denotation" is (1) I can intend to refer
to something I in fact do not refer to (e.g., I may intend to refer to 3 by
"the smallest prime") while I can make no sense of intending to mean something
distinct from what I actually did mean, and (2) nonambiguous phrases can
denote different objects at different times.
Given this distinction, Perlis' point can be put as follows: if we adopt
the view that each of us determines reference by some naive theory, then our
theories are isomorphic up to behavioral distinguishability. However, this
does not imply that there is a theory of reference that must be captured by
cog sci since any theory of reference that makes the same sentences true
will be isomorphic up to this point and as Quine, Putnam, Rorty, and a host
of others are so fond of pointing out: there are many many such theories.
Hence, cog sci should not be concerned with how we determine reference but
only with constructing a machine whose referencing behavior is indistinguish-
able from ours. After all, that is all that we demand from our fellow humans.
John McLean
∂23-Nov-83 1006 @MIT-MC:DAM%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 23 Nov 83 10:06:11 PST
Date: Wed, 23 Nov 1983 11:39 EST
Message-ID: <DAM.11969939207.BABYL@MIT-OZ>
From: DAM%MIT-OZ@MIT-MC.ARPA
To: BERWICK%MIT-OZ@MIT-MC.ARPA
cc: phil-sci%MIT-OZ@MIT-MC.ARPA
Subject: limitations of logic
Date: Tuesday, 22 November 1983 16:28-EST
From: Robert C. Berwick <BERWICK%MIT-OZ at MIT-MC.ARPA>
Kuhn, an expert in the history of science, couldn't think of any
[examples where the formalization (read: axiomatization) of a
sub-field led to important scientific discoveries].
I would be very surprised if such an example were ever found.
However I think the issue of AXIOMATIZATIONS misses the point.
MOST discoveries in theoretical physics were based on MATHEMATICAL
MODELS (everything from the discovery of Neptune to the prediction
of the existence of positrons). The colloquial meaning of "formal" is
MATHEMATICAL, not AXIOMATIC. Formal AXIOMS (well-formed formulas) are
not even used by mathematicians (mathematicians STUDY wffs but they do not
USE them in their discussions or papers).
I hold that MATHEMATICS is important in the DEVELOPMENT
of science. The question then becomes "what is the relationship
between MATHEMATICS and FORMAL LOGIC"? It seems to me that mathematical
thinking (at least mathematical PROOF) is based on some unknown formal
logic with a Tarski-Style semantics. The reason for this is that
every precise ENGLISH statement about mathematical objects can be given
precise Tarskian truth conditions.
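As a tiny concrete illustration of what "Tarskian truth conditions" means
here, one can write the inductive satisfaction clauses for a toy language
directly. The sketch below is an invented one, in Prolog over a small
finite model, with naive negation-as-failure standing in for classical
negation; it is offered only as an example of the style, not as a claim
about mathematical practice:

    % An invented finite model: its individuals and its atomic facts.
    individual(1).  individual(2).  individual(3).
    atomic_fact(even(2)).
    atomic_fact(less(1,2)).  atomic_fact(less(1,3)).  atomic_fact(less(2,3)).

    % Tarski-style satisfaction, defined by induction on the formula.
    sat(A)            :- atomic_fact(A).
    sat(and(P, Q))    :- sat(P), sat(Q).
    sat(or(P, _))     :- sat(P).
    sat(or(_, Q))     :- sat(Q).
    sat(neg(P))       :- \+ sat(P).
    sat(exists(X, P)) :- individual(X), sat(P).
    sat(forall(X, P)) :- \+ (individual(X), \+ sat(P)).

    % ?- sat(exists(X, and(even(X), less(1, X)))).   succeeds with X = 2.
    % ?- sat(forall(X, or(even(X), neg(even(X))))).  succeeds.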
∂23-Nov-83 1008 KJB@SRI-AI.ARPA "Joan's committee"
Received: from SRI-AI by SU-AI with TCP/SMTP; 23 Nov 83 10:08:23 PST
Date: Wed 23 Nov 83 10:03:53-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: "Joan's committee"
To: csli-folks@SRI-AI.ARPA
Dear folks,
In my letter about the Advisory Panel's visit, I referred to Joan's
committee. Barbara and Betsy pointed out that while that committee
was on the preliminary assignment copy, it did not make it into the
final assignment version.
The official name of this committee is the Outreach Committee. Its
membership, on the first pass, was BRESNAN, Smith and Periera.
However, this leaves out area D and, besides, Brian is swamped, so
I would take volunteers, from which Joan and I would make a final
selection.
Sorry for the confusion.
Jon
-------
∂23-Nov-83 1052 @MIT-MC:JCMA%MIT-OZ@MIT-MC perlis on tarski and meaning
Received: from MIT-MC by SU-AI with TCP/SMTP; 23 Nov 83 10:52:41 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 23 Nov 83 13:47-EST
Received: from MIT-APIARY-8 by MIT-OZ via Chaosnet; 23 Nov 83 13:47-EST
Date: Wednesday, 23 November 1983, 13:50-EST
From: JCMA%MIT-OZ@MIT-MC.ARPA
Subject: perlis on tarski and meaning
To: mclean%NRL-CSS%MIT-OZ@MIT-MC.ARPA
Cc: phil-sci@MIT-MC
In-reply-to: The message of 23 Nov 83 11:33-EST from John McLean <mclean at NRL-CSS>
From: John McLean <mclean@NRL-CSS>
Date: Wed, 23 Nov 83 11:33:54 EST
Given this distinction, Perlis' point can be put as follows: if we adopt
the view that each of us determines reference by some naive theory, then our
theories are isomorphic up to behavioral distinguishability. However, this
does not imply that there is a theory of reference that must be captured by
cog sci since any theory of reference that makes the same sentences true
will be isomorphic up to this point and as Quine, Putnam, Rorty, and a host
of others are so fond of pointing out: there are many many such theories.
Hence, cog sci should not be concerned with how we determine reference but
only with constructing a machine whose referencing behavior is indistinguish-
able from ours.
Note that the capacity to produce a program requires an intensional theory of
the behavior to be evinced. Thus, while unimplemented theories can be
examined according to behavioral criteria, implemented theories can be
compared both behaviorally and intensionally. Where does this leave
behavioral cognitive scientists and meaning theorists?
∂23-Nov-83 1600 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 23 Nov 83 16:00:04 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 23 Nov 83 18:57-EST
Date: 23 Nov 83 12:38:09 EST (Wed)
From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Re: limitations of logic
To: "Robert C. Berwick" <BERWICK%MIT-OZ@MIT-MC>, GAVAN%MIT-OZ@MIT-MC
Cc: phil-sci%mit-oz@MIT-MC, Don Perlis <perlis%umcp-cs@CSNet-Relay>
Via: UMCP-CS; 23 Nov 83 17:52-EST
From: "Robert C. Berwick" <BERWICK%MIT-OZ@MIT-MC>
question: are there any examples where the formalization (read:
axiomatization) of a sub-field led to important scientific
discoveries? This doesn't mean post-hoc rational reconstruction, as
RICKL has stressed. Kuhn, an expert in the history of science,
couldn't think of any. There may be isolated examples, but so far,
not a single, documented example has been adduced.
From: KDF%MIT-OZ@MIT-MC
If the question has become "does the explicit use of logic
(pick your favorite) lead to making new discoveries, or has it ever
done so", I would argue, as I have in the past, NO.
I would argue that logic is used, crucially, in everyday science, at
all junctures of every sort. E.g., Einstein's hunt for a theory in
which covariance held led him along many paths, and one by one he
turned them down since they were internally inconsistent (he couldn't
get both the covariance and the other features he wanted) until he hit
on time as the 'grain of salt' that needed changing.
In fact, I would argue that consistency is largely what separates
science from theology and mysticism. See Bronowski's "Magic, science,
and civilization" for an enjoyable discussion of this: a single world
view is what characterizes the goal of the scientific community, and
single means consistent, saying something and taking it seriously
rather than allowing any old statements to wander in at whim. Of
course, often we have trouble doing this, and often hunches lead us to
keep an inconsistency going for awhile, but the whole *point* of it is
to work at getting it to *be* consistent.
And as I said in my previous message, consistency is *not* a
will-o'-the-wisp. Consistent theories *do* arise, all the time. DNA
provides a consistent theory of molecular reproduction; classical
physics is a consistent theory; Einstein's dynamics is a consistent
theory; even Freud's multiple versions of a psychoanalytic theory are
(one at a time) consistent. The latter are not very formal, but it is
easy to formalize them, which may attest to their emptiness.
Even in the 'little' theorettes of everyday research, in which it is
conjectured that one element of some big picture affects some other,
consistency obtains. Who would hypothesize that element A produces
effect Q on element B (measurable as Q=blah in situation X) and yet
that it does not? Or who would hypothesize several such A-Q-X-B
things and, on observing that together they conflict, still maintain
them all and not even regard it as a discovery of note that the
conflict is there?
Another example: recently physics has been buzzing with Bell's theorem
and so-called reality principles, in which quantum mechanics (QM) and
certain principles (RP) conflict on a highly technical, mathematical
basis. Immediately the search was on for an experiment to tell which
was right: it was of course supposed that not both could be right, for
that would be to have an inconsistency. Yet RP as well as QM were
eminently plausible things, as seen by the lights of trained
physicists. Which was to be discarded? *Only* the consistency issue
would even cause one to ask this question.
(To finish the story, QM has won out resoundingly. This does not mean
QM is right, or that its main concepts are utterly clear or that RP has
nothing further to tell us; they are not at all clear, and a whole new
picture may be needed.)
∂23-Nov-83 1720 KJB@SRI-AI.ARPA
Received: from SRI-AI by SU-AI with TCP/SMTP; 23 Nov 83 17:20:27 PST
Date: Wed 23 Nov 83 17:00:12-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
To: csli-folks@SRI-AI.ARPA
Happy Thanksgiving. I hope it is more restful than our typical Thursday!
Jon
-------
∂23-Nov-83 1729 DKANERVA@SRI-AI.ARPA Newsletter No. 10, November 24, 1983
Received: from SRI-AI by SU-AI with TCP/SMTP; 23 Nov 83 17:28:59 PST
Date: Wed 23 Nov 83 16:55:04-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Newsletter No. 10, November 24, 1983
To: csli-friends@SRI-AI.ARPA
CSLI Newsletter
November 24, 1983 * * * Number 10
Today is the Thanksgiving holiday and there are no scheduled CSLI
activities, so this will be a short newsletter. We do want to pass on
what announcements we have, however, and next week we'll be in full
form again. Please remember to get your announcements or reports in
to CSLI-NEWSLETTER@SRI-AI by Wednesday noon. Happy Thanksgiving and
long weekend!
- Dianne Kanerva
* * * * * * *
CSLI ADVISORY PANEL VISIT
The meeting with the Advisory Panel went very well. The members who
were here will each write a report and send it to me next week. The
Panel was very sympathetic to what we are trying to do. They came
with various ideas about what that was, and some serious
apprehensions, but went away excited by the venture, as far as I could
tell. They gave us a lot of good ideas and made us see some of our
problems much more accurately.
Rod Burstall, the one member of the panel who was not able to
attend, is coming next week, and I will compile a final report after
his visit and after I get the letters from the rest of the panel.
- Jon Barwise
I'd like to thank the CSLI staff for their very personal
contributions to the gracious atmosphere that prevailed during the
panel's visit--flowers, refreshments, and all. Almost everyone was
involved in every activity, but the following specific contributions
ought to be mentioned. Emma Pease was responsible for the overall
planning and organizing. Pat Wunderman organized the breakfasts, and
Leslie Batema and Sandy McConnel-Riggs provided the lunches and teas,
as well as the flowers for Ventura Hall. Bach-Hong Tran, Frances
Igoni, and Dianne Kanerva helped with these activities as needed.
- Betsy Macken
* * * * * * *
BUILDING SECURITY AT VENTURA HALL
Ventura Hall is locked at five o'clock each day, but many
activities extend past that time. If you are in the building late,
please check before you leave the conference room, reading room, or
the like, that the windows are closed and locked.
* * * * * * *
TINLUNCH SCHEDULE
TINLunch will be held on each Thursday at Ventura Hall on the
Stanford University campus as a part of CSLI activities. Copies of
TINLunch papers will be at SRI in EJ251 and at Stanford University in
Ventura Hall.
November 24 THANKSGIVING
December 1 Paul Martin
December 8 John McCarthy
* * * * * * *
CSLI COLLOQUIUM
Thursday, December 1, 4:15 p.m., Room G-19, Redwood Hall
"Selected Problems in Visible Language"
Charles Bigelow
Computer Science Department
Stanford University
* * * * * * *
SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
On Wednesday, November 23, Craig Smorynski of San Jose State
University spoke on "Self-Reference and Bi-Modal Logic." Some results
from the modal and bi-modal analysis of self-reference in arithmetic
were discussed, including work of Solovay, Carlson, and the speaker.
Coming Events:
SPEAKER: Professor J. E. Fenstad, University of Oslo
TIME: Wednesday, Nov. 30, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
TOPIC: Connections between work in reverse mathematics
and nonstandard analysis
* * * * * * *
REMINDER ON WHY CONTEXT WON'T GO AWAY
On Tuesday, November 29, the speaker will be Peter Gardenfors,
who is visiting from Sweden. His talk will be on providing an
epistemic semantics that explains the context dependence of
conditionals. We have a slight problem on that day: Sellars' Kant
lectures begin at 4:15. I suggest that we start earlier than
usual, at 2:30, but I will have to verify that time. We will meet in
Ventura Hall as usual.
- Joseph Almog
* * * * * * *
CALL FOR PAPERS
1984 ACM SYMPOSIUM ON LISP AND FUNCTIONAL PROGRAMMING
UNIVERSITY OF TEXAS AT AUSTIN, AUGUST 5-8, 1984
(Sponsored by the ASSOCIATION FOR COMPUTING MACHINERY)
This is the third in a series of biennial conferences on the LISP
language and issues related to applicative languages. Especially
welcome are papers addressing implementation problems and programming
environments. Areas of interest include (but are not restricted to)
systems, large implementations, programming environments and support
tools, architectures, microcode and hardware implementations,
significant language extensions, unusual applications of LISP, program
transformations, compilers for applicative languages, lazy evaluation,
functional programming, logic programming, combinators, FP, APL,
PROLOG, and other languages of a related nature.
Please send eleven (11) copies of a detailed summary (not a
complete paper) to the program chairman:
Guy L. Steele Jr.
Tartan Laboratories Incorporated
477 Melwood Avenue
Pittsburgh, Pennsylvania 15213
Summaries should explain what is new and interesting about the
work and what has actually been accomplished. It is important to
include specific findings or results and specific comparisons with
relevant previous work. The committee will consider the
appropriateness, clarity, originality, practicality, significance, and
overall quality of each summary. Time does not permit consideration
of complete papers or long summaries; a length of eight to twelve
double-spaced typed pages is strongly suggested.
February 6, 1984 is the deadline for the submission of summaries.
Authors will be notified of acceptance or rejection by March 12, 1984.
The accepted papers must be typed on special forms and received by the
program chairman at the address above by May 14, 1984. Authors of
accepted papers will be asked to sign ACM copyright forms.
Proceedings will be distributed at the symposium and will later be
available from ACM.
Local Arrangements Chairman          General Chairman
Edward A. Schneider                  Robert S. Boyer
Burroughs Corporation                University of Texas at Austin
Austin Research Center               Institute for Computing Science
12201 Technology Blvd.               2100 Main Building
Austin, Texas 78727                  Austin, Texas 78712
(512) 258-2495                       (512) 471-1901
CL.SCHNEIDER@UTEXAS-20.ARPA          CL.BOYER@UTEXAS-20.ARPA
* * * * * * *
-------
∂23-Nov-83 2032 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 23 Nov 83 20:31:29 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 23 Nov 83 23:28-EST
Received: From Csnet-Cic.arpa by UDel-Relay via smtp; 23 Nov 83 18:57 EST
Date: 23 Nov 83 12:00:55 EST (Wed)
From: Don Perlis <perlis%umcp-cs@csnet-cic.arpa>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Re: limitations of logic
To: GAVAN%MIT-OZ%mit-mc.arpa@udel-relay.arpa,
Don Perlis <perlis%umcp-cs%csnet-cic.arpa@udel-relay.arpa>
Cc: phil-sci%mit-oz%mit-mc.arpa@udel-relay.arpa
Via: UMCP-CS; 23 Nov 83 17:50-EST
From: GAVAN%MIT-OZ%mit-mc.arpa@UDel-Relay
What makes you so sure reality is consistent? Can you demonstrate its
consistency? In my view, there aren't any EMPIRICAL theories that
have been freed of their grains of salt or ceteris paribus conditions.
Empirical theories are abstractions off experience -- causally related
concepts. And since the particulars that fall under concepts appear
to be organized, as Wittgenstein noticed, like family resemblances
(they're only partially consistent replicas of the concept), some of
them will fail to conform to predictions made on the basis of
theories.
By definition reality is consistent. If it appears that X and not-X,
then something is wrong with our concept of X, as you are also saying.
Reality itself is what it is, and not what it isn't. However, as to
how it is that one thing is and another isn't, ie why the world is as
it is, and not some other way, mystification pervades. It is pleasing to
suppose that in fact all possibilities *do* obtain somehow (David Lewis,
Hugh Everett), and include their own reasons for existence (reasons,
when there are such, being parts of reality too). This then cuts the
ground from under your following comments:
Even a theory like "every event has some cause" (a theory you'd
probably need to believe if you also believed reality is consistent)
--not at all, if cause means temporally antecedent; there's no need for
time at all in a consistent theory; and no real need for cause either--
has its grain of salt. Such a theory is of course (formally) both
unverifiable and unfalsifiable (see Popper), but there's no adequate
explanation of how the chain of causes was started except perhaps by
--why must it be started? Or why not a self-causer? This sounds odd,
but oddity isn't impossibility; some have suggested that a kind of self-
reference may be fundamental in the nature of *physical* reality (eg J.
Wheeler)--
some uncaused cause or spontaneously-acting prime mover. But if you
accept this, wouldn't you have to chuck the "every event has some
cause" theory? Whoops, there goes your consistency!
This has nothing to do with consistency; it has to do with self-reference
and why there is a world at all, a very puzzling matter, I agree.
Newton's theory of gravitation (plus his dynamical theory of force and
motion) is a consistent theory. It is not a correct theory, when
interpreted as intended, in part because of the very things you say
(eg the assumption that the concept of time has a certain fixed sense
turns out to obscure similarities-that-are-not-identities). But the
theory remains consistent.
∂24-Nov-83 1748 @MIT-MC:perlis%umcp-cs@CSNET-CIC Tarski and meaning
Received: from MIT-MC by SU-AI with TCP/SMTP; 24 Nov 83 17:48:14 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 24 Nov 83 20:45-EST
Received: From Csnet-Cic.arpa by UDel-Relay via smtp; 24 Nov 83 20:37 EST
Date: 24 Nov 83 20:13:10 EST (Thu)
From: Don Perlis <perlis%umcp-cs@csnet-cic.arpa>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Tarski and meaning
To: John McLean <mclean%nrl-css.arpa@udel-relay.arpa>,
phil-sci%mit-mc.arpa@udel-relay.arpa
Cc: perlis%umcp-cs@csnet-cic.arpa
Via: UMCP-CS; 24 Nov 83 20:16-EST
From Don Perlis (21 Nov)
A final word on meaning: Tarski can be viewed most
fruitfully as providing not a definition of *meaning* but of
the *different* *possible* meanings of a statement in different
contexts. You pick the context you want and then the meaning
comes with it. The meaning is then part of your naive theory,
not external to it. From the outside, of course one can only
see all the different possibilities, and indeed there is in
general no one *right* meaning out there. It is in *my* head
that 'this apple' means the one I have in mind; on the outside
people can speculate on just what I might have meant. Their
ability to get it 'right' (tho this is not a well-defined
notion, as Stich and others would argue) suggests that we hold
similar naive theories in our heads, but doesn't as far as I
can see show that there is a 'real' meaning and theory that we
somehow must divine in our cog sci efforts.
From: John McLean <mclean%nrl-css.arpa@UDel-Relay>
Although I agree with the last statement of this passage, I
disagree with just about everything else. First of all, Tarski was
no more concerned with meaning than, say, Quine. Tarski was
concerned with "truth" and those semantic notions necessary for
its definition, viz. "denotation" and "satisfaction".
We agree. I am not debating history, but suggesting fruitfulness of
concepts. However, Tarski's views as you present them are in line with
what I mean to say here anyway. Truth (or satisfaction) in a *model*
is what Tarski defined, and a model is what I called a context, ie you
first pick 'meanings' for keywords, and then the rest follows. It's
actually rather trivial, and odd that people make such a fuss over it.
(Not to belittle Tarski, but just to puzzle at the 'philosophical'
tangles people seem to create around something straightforward.) So truth
of a formula for Tarski is relative to a context, and no 'meaning' is
presented beyond that.
One can approach meaning within this framework by saying that
the meaning of a term picks out its reference if it has one in
a given context and the meaning of a sentence is its truth
conditions, but one can hold on to Tarski's definition of
"truth" and reject meanings completely.
Here I disagree. To define truth of a formula S in a model (context)
M, Tarski provided a translation of S into plain informal language, ie
into a relation on M that then either holds or doesn't. This relation
means something, in the ordinary (ie, naive) sense (and easily reduced
to set membership for the fussy). Thus in the usual model N of
arithmetic, the formula (x)(y)(x+y=y+x) *means* that for all natural
numbers x and y, their sum in one order is the same as that in the
other order. Trivial, obvious, appropriate, and meaningful. The
formula in question then is seen to be 'true' in N, because its
'meaning' (translation) is so.
--Don Perlis
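[Editor's note: a restatement of the arithmetic example above in standard notation, not part of the original message. Tarski's definition relativizes truth to a structure, so for the standard model $\mathcal{N}$ of arithmetic:
$$\mathcal{N} \models \forall x\,\forall y\,(x+y=y+x) \quad\Longleftrightarrow\quad \text{for all } a,b\in\mathbb{N},\ a+b=b+a.$$
The right-hand side is the informal ("naive") statement about the natural numbers that the message calls the formula's meaning in that context.]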
∂24-Nov-83 1809 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: reasoning about inconsistency
Received: from MIT-MC by SU-AI with TCP/SMTP; 24 Nov 83 18:09:47 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 24 Nov 83 21:06-EST
Received: From Csnet-Cic.arpa by UDel-Relay via smtp; 24 Nov 83 21:02 EST
Date: 24 Nov 83 20:27:41 EST (Thu)
From: Don Perlis <perlis%umcp-cs@csnet-cic.arpa>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Re: reasoning about inconsistency
To: MONTALVO%MIT-OZ%mit-mc.arpa@udel-relay.arpa,
phil-sci%MIT-OZ%mit-mc.arpa@udel-relay.arpa,
jerryb%MIT-OZ%mit-mc.arpa@udel-relay.arpa,
KDF%MIT-OZ%mit-mc.arpa@udel-relay.arpa
Cc: MONTALVO%MIT-OZ%mit-mc.arpa@udel-relay.arpa
Via: UMCP-CS; 24 Nov 83 20:32-EST
From: JERRYB@MIT-OZ
The Viewpoint mechanism in Omega solves this problem by
placing theories in viewpoints and allowing one to have a
logical theory in viewpoint A about the structure of the
(possibly contradictory) logical theory in viewpoint B.
Thus reasoned analysis of logical contradictions can be
performed.
From: KDF@MIT-OZ
I'm sure the viewpoint mechanism in Omega is
sufficiently powerful to allow the kind of meta-reasoning
that you allude to, but has anyone actually done it? If
so, how different are the details from the FOL approach?
From: MONTALVO%MIT-OZ%mit-mc.arpa@UDel-Relay
Yes, John Lamping has implemented such an example in FOL, the
MasterMind example in IJCAI-83. As far as I've been able to ferret
out, from talking to both Richard Weyhrauch and Carl Hewitt, the only
real difference between the viewpoint mechanism in Omega and the
context mechanism in FOL (which some people may think is a detail) is
that symbol names in Omega are global, whereas in FOL they are
relative to a context. This may have some consequence in an
application where you want to have the same symbol refer to two
different things depending on context.
In fact, it is not necessary to go to either OMEGA or FOL (Weyhrauch's
system) to reason about inconsistency. A one-tiered system such as
ordinary first-order logic is sufficient. (It is unfortunate that
'FOL' is used for both Weyhrauch's system and any old first-order
system.) All that is needed is to be careful about passing from a
formula's expression as object (mention) to its expression as assertion
(use), so that self-referential paradox is not encountered in assertion
mode.
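[Editor's note: a minimal Prolog sketch of the use/mention point above, not part of the original message; the predicate names are illustrative only. The contradictory theory is only mentioned, as quoted terms, and never asserted, so it can be examined without anything paradoxical becoming provable.]
in_theory(b, p).                 /* viewpoint B contains p ... */
in_theory(b, not(p)).            /* ... and also not(p) */
inconsistent(T) :- in_theory(T, F), in_theory(T, not(F)).
/* ?- inconsistent(b).   succeeds, although neither p nor not(p)
      is itself asserted anywhere in the program. */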
∂24-Nov-83 2246 @MIT-MC:GAVAN%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 24 Nov 83 22:45:00 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 25 Nov 83 01:41-EST
Received: From Mit-Mc.arpa by UDel-Relay via smtp; 25 Nov 83 1:39 EST
Date: Fri, 25 Nov 1983 01:37 EST
Message-ID: <GAVAN.11970353879.BABYL@MIT-OZ>
From: GAVAN%MIT-OZ@mit-mc.arpa
To: Don Perlis <perlis%umcp-cs@csnet-cic.arpa>
Cc: GAVAN%MIT-OZ%mit-mc.arpa@udel-relay.arpa,
Don Perlis <perlis%umcp-cs%csnet-cic.arpa@udel-relay.arpa>,
phil-sci%mit-oz%mit-mc.arpa@udel-relay.arpa
Subject: limitations of logic
In-reply-to: Msg of 23 Nov 1983 12:00-EST from Don Perlis <perlis%umcp-cs at csnet-cic.arpa>
From: Don Perlis <perlis%umcp-cs at csnet-cic.arpa>
From: GAVAN%MIT-OZ%mit-mc.arpa@UDel-Relay
What makes you so sure reality is consistent? Can you demonstrate its
consistency?
By definition reality is consistent.
By definition of what? Reality or consistency? I guess it would have
to be the latter, since there are very many versions of the former.
Of course, if there are very many versions of the former, there are
probably very many versions of the latter. So, whichever one you
choose, reality or consistency, I'll ask, "If you say that reality is
consistent by definition, whose definition are you referring to?"
If it appears that X and not-X, then something is wrong with our
concept of X, as you are also saying.
That's not what I was saying. What I was saying is that there's
ALWAYS something wrong with our concept of X. There are always
particulars that fall under it (in the extension of the concept) which
do not agree with its intension. If it appears that X and not-X, all
we need is an additional ceteris paribus clause and our theory (or
concept of X) is salvaged. There are no falsifying experiments.
There may not be anything "wrong" with the theory other than the
temporary exclusion (by oversight) of a necessary ceteris paribus
condition. In the face of a supposedly falsifying experiment, the
defender of a theory may simply append a new ceteris paribus condition
to the list.
Reality itself is what it is, and not what it isn't.
You can't even prove that it is. So how can you say that it is what
it is and not what it isn't?
The law of contradiction is a normative law, not an empirical law.
You seem to be trying to demonstrate the necessity of the application
of normative, logical laws to empirical reality by first assuming the
validity of such an application and then arguing from that premise.
No fair. Your argument presumes that its conclusion is correct.
It is pleasing to suppose that in fact all possibilities *do*
obtain somehow (David Lewis, Hugh Everett), and include their own
reasons for existence (reasons, when there are such, being parts
of reality too). This then cuts the ground from under your
following comments:
Even a theory like "every event has some cause" (a theory you'd
probably need to believe if you also believed reality is consistent)
--not at all, if cause means temporally antecedent; there's no need for
time at all in a consistent theory; and no real need for cause either--
Cause does not necessarily mean temporally antecedent, as I can tell
(from your discussion above of "reasons" and "ground") you already
realize. A thing's cause is its reason or ground -- the reason why it
is one way and not another. It could be something in the present
(like a material cause) or in the future (like a final cause), or more
likely some mixture of these. Our preconceived notions of a thing
(formal cause) may also cause it to appear to be the way we think it
is. So, to say that "every event has some cause" is incoherent is to
claim that reality is inconsistent -- that ultimately there is no
sufficient reason for it (reality) and to claim that there can never
be a body of law that explains reality (or any part of it)
consistently.
In my example (it is true) I used efficient causes to show this. I
could have used material cause (there is no ultimate primitive
substance of which all substances are composed; what's a gluon made of?)
or I could have used final cause (what is the purpose of reality?) or
I could have used formal cause (we all have different notions of
reality, so how can any one of us assert its consistency?).
has its grain of salt. Such a theory is of course (formally) both
unverifiable and unfalsifiable (see Popper), but there's no adequate
explanation of how the chain of causes was started except perhaps by
--why must it be started? Or why not a self-causer?
If it were a "self-causer" it would be a spontaneous or freely-acting
event. It would have no cause. It would be an event that had no
cause. Sure, reality could be self-causing, but then there would be
no reason to presume that it's consistent. So why demand consistency
in any science of reality?
some uncaused cause or spontaneously-acting prime mover. But if you
accept this, wouldn't you have to chuck the "every event has some
cause" theory? Whoops, there goes your consistency!
This has nothing to do with consistency; it has to do with self-reference
and why there is a world at all, a very puzzling matter, I agree.
Then what exactly do you mean by consistency? Please explain.
Newton's theory of gravitation (plus his dynamical theory of force and
motion) is a consistent theory. It is not a correct theory, when
interpreted as intended, in part because of the very things you say
(eg the assumption that the concept of time has a certain fixed sense
turns out to obscure similarities-that-are-not-identities). But the
theory remains consistent.
So what good is consistency? If the subject matter of science is an
inconsistent reality, why should we desire consistent scientific
theories? You apparently mean that scientific theories should be
INTERNALLY consistent. But if reality is inherently internally
INCONSISTENT, why would we want scientific laws to have internal
consistency?
∂25-Nov-83 0220 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #55
Received: from SU-SCORE by SU-AI with TCP/SMTP; 25 Nov 83 02:20:01 PST
Date: Thursday, November 24, 1983 5:26PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #55
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Friday, 25 Nov 1983 Volume 1 : Issue 55
Today's Topics:
Implementations - Search
Applications - Eliza
LP Library Update
----------------------------------------------------------------------
Date: Sun 20 Nov 83 19:35:52-PST
From: Pereira@SRI-AI
Subject: Breadth-first Search
Wayne Christopher asked about breadth-first search in Prolog, in
particular how to enumerate all expressions over some vocabulary
of symbols with the shortest expressions first. Here is a short
program in pure Prolog that does the job:
:- op(500,fy,succ).
genterm(T) :-
size(S),
genterm(T,S,0).
size(0).
size(succ N) :- size(N).
genterm(X,S0,S) :-
variable_cost(S0,S).
genterm(C,S0,S) :- constant(C,S0,S).
genterm(T,S0,S) :-
term(T,As,S0,S1),
genargs(As,S1,S).
genargs([],S,S).
genargs([A|As],S0,S) :-
genterm(A,S0,S1),
genargs(As,S1,S).
% Sample data.
% Add a clause for variable_cost if terms with variables are needed.
constant(a,S,S). % zero cost constant
constant(b,succ S,S). % unit cost constant
term(f(X,Y),[X,Y],succ succ S,S).
term(g(X),[X],succ S,S).
The predicate genterm/1 is intended to be used as a generator
of terms to be tested by some other predicate(s). Each solution
of genterm/1 is a different term. Terms are generated in order
of complexity, where the complexity of each constant and function
symbol is given separately in a constant/3 or term/4 clause.
Actually, the first argument of term/4 need not be a single
functor applied to variable arguments, but can be any term,
provided that the variables in it that are to be filled by
other generated terms are listed in the second argument.
-- Fernando Pereira
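[Editor's note: a usage sketch, not part of the original message. Assuming the program and sample data above are loaded, and that variable_cost/2 is either defined or declared (e.g., :- dynamic variable_cost/2.) so that the first genterm/3 clause simply fails rather than raising an error, backtracking through genterm/1 enumerates terms in order of increasing cost:
?- genterm(T).
T = a ;
T = b ;
T = g(a) ;
T = f(a,a) ;
T = g(b) ;
T = g(g(a)) ;
... ]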
------------------------------
Date: Fri, 18 Nov 83 12:26:14 pst
From: Cohen%UCBKIM@Berkeley (Shimon Cohen)
I guess we are all fans of Prolog, aren't we? Well, some of
us are convinced that Prolog will take over Lisp, Ada, and
the next 1000 languages; some are curious to see what it
is all about; AND some are afraid of being left behind by the
Japanese ...
I guess we all agree that most languages are equivalent to
a Turing machine, meaning you can do almost anything in
any language, so the question is "what makes one language
'better' than another?" Usually we compare efficiency,
clarity, readability, features (services provided by the
language), mathematical background, etc.
My experience with Prolog shows that it is not always
superior to other languages (for example, Lisp), but in certain
applications it is! So I ask myself (and all of you out
there):
1. Are you really convinced that Prolog should
replace Lisp, Pascal, Ada, or some of them?
2. In what areas do you feel that Prolog is better,
and in what areas is it the same or worse?
3. Aren't you afraid that Prolog will become
another APL-type language, good only for very specific
applications?
I would like to give an example where Prolog seems to be superior
to Lisp and other languages. I wrote the famous Eliza program in
Prolog; it seems to be almost three times shorter than the Lisp
implementation and much clearer, more efficient, etc.
/* Start of program */
top :- repeat,write(' you: '),main,fail.
main :- read_sent( Sent ),!,
replace_list( Sent, Rsent ),!, /* replace i --> you etc */
key_list( Rsent, Ks ), sort( Ks, Sk ),reverse( Sk, Skeys ),!,
try_keys( Skeys, Rsent, Answer ), /* get an answer */
write(' doc: '),print_reply( Answer ), nl.
pm( [],[] ). /* pattern matcher in Prolog ... see data file */
pm( [X|L1], [X|L2] ) :- pm(L1,L2),!.
pm( [oneof(Options,X) | L1], [X|L2]) :- member(X,Options), pm(L1,L2).
pm( [mvar(X) | L], L2) :- append(X,L1,L2), pm(L,L1).
replace_list( [], [] ).
replace_list( [X|L], [Y|L1] ) :- replace(X,Y),!, replace_list(L,L1).
replace_list( [X|L], [X|L1] ) :- !,replace_list(L,L1).
key_list( [], [key(-1,none)] ).
key_list( [K|L1], [ key(P,K)|L2] ) :- priority(K,P), key_list(L1,L2).
key_list( [K|L1], L2 ) :- key_list(L1,L2).
try_keys( Keys, Rsent, Outsent ) :-
member( key(P,K1), Keys ), /* select a key (generator) */
trule( K1, Ptrn, Nans, Ans ), /* find a rule */
pm( Ptrn, Rsent ),!, /* match ? */
random( Nans, N ), /* get a random number up to Nans */
get_answer(Ans,N,Outsent). /* get the answer from choices */
get_answer( [A|L], 0, A).
get_answer( [B|L], N, A) :- N1 is N - 1, get_answer(L,N1,A).
/* end of Eliza main system */
The following describes the data file for the Eliza (Doctor) program.
The data consists of the following:
1. Transformation rules of the form:
trule(Keyword,Pattern,Number_of_answers,Answer_list).
where:
a. Pattern is the sentence pattern for the keyword.
b. Number_of_answers - how many answers are available for
this keyword+pattern.
c. Answer_list - list of available answers for this
pattern+keyword.
In the patterns:
mvar(A) stands for zero or more words.
A stands for any one word.
oneof(List,A) stands for A being one of the words in List.
Note that in some cases a keyword can acquire the transformation
rule of another keyword and a certain pattern might include
answers of another keyword.
2. Replacement rules in the form:
replace(A,B) - replace A by B.
3. Priority rules in the form:
priority(A,N) - the priority of keyword A is N.
If a keyword does not have a priority rule it is assumed to
be of priority 0.
The priorities are in ascending order.
EXAMPLES:
---------
/* Note the way variables from the pattern are used in the answers */
trule(your,
[mvar(A),your, oneof([mother,father,brother,sister,husband],C),
mvar(B)], 5,
[['Tell',me,more,about,your,family,'.'],
['Is',your,family,important,to,you,'?'],
['Who',else,in,your,family,B,'?'],
['Your',C,'?'],
['What',else,comes,to,your,mind,when,you,
think,of,your,C,'?']]).
. . ..
replace(i,you).
replace(you,i).
replace(computers,computer).
. . .
priority(computer,40).
priority(remember,5).
priority(if,3).
. . .
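[Editor's note: an illustration of the pattern matcher pm/2 on the rule above, not part of the original message; it assumes the usual library member/2 and append/2. The first argument is the pattern, the second the (already word-replaced) sentence:
?- pm([mvar(A), your, oneof([mother,father,brother,sister,husband],C), mvar(B)],
      [you, hate, your, mother, sometimes]).
A = [you,hate], C = mother, B = [sometimes]
Here mvar(A) absorbs the words before 'your', oneof picks out 'mother', and mvar(B) takes the remainder.]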
------------------------------
Date: Thu 24 Nov 83 17:20:04-PST
From: Chuck Restivo <Restivo@SU-SCORE>
Subject: LP Library Update
New versions of Not.Pl and SetOf.Pl have been added to the
PS:<Prolog> directory at {SU-SCORE}. A grammar pre-processor
has also been added; see GConsult.Pl. Thank you, Richard O'Keefe.
For those with read-only access I have a limited number of hard
copies that can be mailed.
-ed
------------------------------
End of PROLOG Digest
********************
∂25-Nov-83 1520 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 25 Nov 83 15:20:18 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 25 Nov 83 17:48-EST
Date: 25 Nov 83 16:07:28 EST (Fri)
From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Re: limitations of logic
To: phil-sci%mit-oz@mit-mc
Cc: GAVAN%MIT-OZ%mit-mc.arpa@UDEL-RELAY
Via: UMCP-CS; 25 Nov 83 17:35-EST
From: Don Perlis <perlis%umcp-cs at csnet-cic.arpa>
By definition reality is consistent.
From: GAVAN%MIT-OZ@MIT-MC
By definition of what? Reality or consistency? I guess it would have
to be the latter, since there are very many versions of the former.
If it appears that X and not-X, then something is wrong with our
concept of X, as you are also saying.
That's not what I was saying. What I was saying is that there's
ALWAYS something wrong with our concept of X.
Well, sort of. How do you know that for sure? It's possible to overdo
the humble bit; just maybe the world *is* intelligible. Is there
something wrong with our concept of the structure of DNA? *Must* there
be? Are you *sure* we can never get a picture right (in the sense that
it, as such, need never be refurbished)?
In any case, you and I seem to agree that X and not-X is something wrong
with the concept of X, whatever else may be wrong. And in such a case,
we *see* the evidence, while in other cases (DNA) it's easy to *say*
something must be wrong, but idle also since it leads nowhere. The
bald contradiction shows the need for a change.
Reality itself is what it is, and not what it isn't.
You can't even prove that it is. So how can you say that it is what
it is and not what it isn't?
I mean not an 'external reality' but simply 'whatever'. To have X *and*
not-X is to deny the meaning of having X. This is what I mean by saying
it is definitional. To have X (internal, external, whatever) is (by the
way we agree to use words, or at least the way *I* tend to use them) a
description that refers to a distinction between one thing and another
(not-X).
Whether this distinction is present in *reality* is another matter, as
is the issue of what 'reality' means in the first place. I am
addressing not the correctness of theories, but their intelligibility
to us as a part of our scientific endeavors. And consistency
(internal) is a basic requirement here, since otherwise our statements
have no meaning: they make no distinctions, they allow no conclusions
(alternatively, they allow all conclusions).
A thing's cause is its reason or ground -- the reason why it
is one way and not another. It could be something in the present
(like a material cause) or in the future (like a final cause), or more
likely some mixture of these. Our preconceived notions of a thing
(formal cause) may also cause it to appear to be the way we think it
is. So, to say that "every event has some cause" is incoherent is to
claim that reality is inconsistent -- that ultimately there is no
sufficient reason for it (reality) and to claim that there can never
be a body of law that explains reality (or any part of it)
consistently.
I still don't see why cause comes into the theory itself. Do you mean
that if the theory is viewed as a kind of 'cause' (ie a claim about a
cause) then if correct this itself is part of the picture and so should
also be described inside the theory? Well, this is not so hard. It's
simply a language with self-reference. I don't see that any infinite
regress occurs.
Newton's theory of gravitation (plus his dynamical theory
of force and motion) is a consistent theory. It is not a
correct theory, when interpreted as intended, in part
because of the very things you say (eg the assumption that
the concept of time has a certain fixed sense turns out to
obscure similarities-that-are-not-identities). But the
theory remains consistent.
So what good is consistency? If the subject matter of science is an
inconsistent reality, why should we desire consistent scientific
theories? You apparently mean that scientific theories should be
INTERNALLY consistent. But if reality is inherently internally
INCONSISTENT, why would we want scientific laws to have internal
consistency?
Who says reality is internally inconsistent? I say it isn't, and not
because of some fancy property of reality, but simply by what we mean
when we utter a statement: that a certain distinction is being
entertained. It is the job of the scientist to entertain such and to try
to assess them by imaginative comparison to predictions/data. I agree
that concepts shift around, are redefined, but not at random: they must
show promise, and for this they require that the scientist can think
about them, ie recognize imaginatively a distinction between X and
not-X. If it turns out (whatever that means) that the distinction is a
poor one, that in no way bespeaks an inconsistency; it simply tells us
that we were entertaining a picture different from the one we hoped.
I don't mean to say that reality/cause/meaning is/are unproblematic.
On the contrary. I think that science is just beginning to come
to grips with itself, with what it means that such a thing as science
can flourish. I think that law itself will be a matter of deeper scientific
examination. Some physicists have been looking in this direction for a while.
Among philosophers, I am aware of only Justus Buchler ("Metaphysics of
Natural Complexes") in this camp. Do you know his work, and what do you
think of it?
∂25-Nov-83 1603 CLT SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Prof. J. E. Fenstad, University of Oslo
TITLE: Peano's existence theorem for ordinary differential equations
in reverse mathematics and nonstandard analysis.
TIME: Wednesday, Nov. 30, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
Abstract:
We continue the exposition of Steve Simpson's work on reverse
mathematics, locating the exact position for the provability
of Peano's theorem. It follows that the nonstandard proof is
more constructive than the standard textbook proof.
∂25-Nov-83 1731 @SRI-AI.ARPA:GOGUEN@SRI-CSL rewrite rule seminar
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Nov 83 17:31:17 PST
Received: from SRI-CSL by SRI-AI.ARPA with TCP; Fri 25 Nov 83 17:24:09-PST
Date: 25 Nov 1983 1716-PST
From: GOGUEN at SRI-CSL
Subject: rewrite rule seminar
To: Elspas at SRI-CSL, JGoldberg at SRI-CSL, Goguen at SRI-CSL,
Green at SRI-CSL, DHare at SRI-CSL, Kautz at SRI-CSL, Lamport at SRI-CSL,
Levitt at SRI-CSL, Melliar-Smith at SRI-CSL, Meseguer at SRI-CSL,
Moriconi at SRI-CSL, Neumann at SRI-CSL, Pease at SRI-CSL,
Schwartz at SRI-CSL, Shostak at SRI-CSL, Oakley at SRI-CSL, Crow at SRI-CSL,
Ashcroft at SRI-CSL, Denning at SRI-CSL, Geoff at SRI-CSL,
Rushby at SRI-CSL, Jagan at SRI-CSL, Jouannaud at SRI-CSL,
Nelson at SRI-CSL, Hazlett at SRI-CSL, Lansky at SRI-CSL, Billoir at SRI-CSL
cc: jk at SU-AI, waldinger at SRI-AI, stickel at SRI-AI, pereira at SRI-AI,
clt at SU-AI, csli-friends at SRI-AI, dkanerva at SRI-AI,
briansmith.pa at PARC-MAXC
Jean-Pierre will try to cover the following topics for us:
1. Termination: Kruskal's theorem (without proof), simplification orderings,
Dershowitz's theorem (with proof using Kruskal because simple).
2. Recursive Path Ordering with Status: examples.
3. Equivalence of Church-Rosser and Confluence: proof is an exercise.
4. Noetherian Induction: application to Newman's theorem.
5. Huet's theorem: Local Confluence can be checked on critical pairs
(with proof).
-------
∂26-Nov-83 0339 @MIT-MC:GAVAN%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 26 Nov 83 03:37:26 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 26 Nov 83 06:34-EST
Date: Sat, 26 Nov 1983 06:34 EST
Message-ID: <GAVAN.11970669987.BABYL@MIT-OZ>
From: GAVAN%MIT-OZ@MIT-MC.ARPA
To: Don Perlis <perlis%umcp-cs@CSNET-CIC.ARPA>
Cc: phil-sci%mit-oz@MIT-MC
Subject: limitations of logic
In-reply-to: Msg of 25 Nov 1983 16:07-EST from Don Perlis <perlis%umcp-cs at CSNet-Relay>
From: Don Perlis <perlis%umcp-cs at CSNet-Relay>
From: GAVAN
From: Don Perlis <perlis%umcp-cs at CSNet-Relay>
If it appears that X and not-X, then something is wrong with our
concept of X, as you are also saying.
That's not what I was saying. What I was saying is that there's
ALWAYS something wrong with our concept of X.
Well, sort of. How do you know that for sure? It's possible to overdo
the humble bit; just maybe the world *is* intelligible. Is there
something wrong with our concept of the structure of DNA? *Must* there
be? Are you *sure* we can never get a picture right (in the sense that
it, as such, need never be refurbished)?
Yes. We can never get the picture right (completely and consistently)
because we ourselves are part of the picture. We share the faith (and
that's what it is) in the existence of an intelligible world, but the
world you have faith in is a consistent one. The world I have faith
in can't possibly be a consistent one. One reason is that you and I
appear to have faith in the existence of two distinct (albeit
overlapping) worlds.
In any case, you and I seem to agree that X and not-X is something wrong
with the concept of X, whatever else may be wrong. And in such a case,
we *see* the evidence, while in other cases (DNA) it's easy to *say*
something must be wrong, but idle also since it leads nowhere. The
bald contradiction shows the need for a change.
But the theory itself may not need to be discarded. We can simply add
a ceteris paribus condition. Marvin's example (if I recall it
approximately correctly) is that our theory of birds would hold that,
among other things, "birds are animals which can fly". But
ornithologists classify ostriches as birds and ostriches can't fly.
So X and not-X. Is something wrong with our theory of birds? Not
really. We just need a censor (ceteris paribus condition) which says
that the theory's proposition about flying doesn't apply in the case
of the ostrich. The theory (the concept of bird) still stands.
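[Editor's note: a minimal Prolog sketch of the "censor" idea above, not part of the original message; negation as failure plays the role of the ceteris paribus clause.]
bird(tweety).
bird(ozzie).
ostrich(ozzie).
exception(X) :- ostrich(X).            /* the censor */
flies(X) :- bird(X), \+ exception(X).  /* birds fly, other things being equal */
/* ?- flies(tweety).   succeeds
   ?- flies(ozzie).    fails, yet bird(ozzie) still holds */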
Reality itself is what it is, and not what it isn't.
You can't even prove that it is. So how can you say that it is what
it is and not what it isn't?
I mean not an 'external reality' but simply 'whatever'.
Then you're talking about those aspects of your world which exist because
you posit their existence -- like consistency.
To have X *and* not-X is to deny the meaning of having X. This is
what I mean by saying it is definitional. To have X (internal,
external, whatever) is (by the way we agree to use words, or at
least so *I* tend to use them) is a description that refers to a
distinction, between one thing and another (not-X).
Whether this distinction is present in *reality* is another matter, as
is the issue of what 'reality' means in the first place. I am
addressing not the correctness of theories, but their intelligibility
to us as a part of our scientific endeavors. And consistency
(internal) is a basic requirement here, since otherwise our statements
have no meaning: they make no distinctions, they allow no conclusions
(alternatively, they allow all conclusions).
Okay. For the sake of argument I will agree that consistency is an
important desideratum for a theory, but only for beings who posit
consistency as an important desideratum. I have been disputing the
contention that scientific theories about reality must be consistent
and that if a theory isn't consistent then it doesn't count as a
scientific theory. Scientists, a species of being who do indeed seem
to posit consistency as an important desideratum, maintain consistency
in their theories about (for reasons stated earlier) a fundamentally
inconsistent reality, by adding ceteris paribus clauses to their
theories. Any theoretician may add any number of such clauses to
explain away what would otherwise stand as falsifying experiments, and
his/her theory would still be "consistent". Scientific theories can
be kept consistent by consistently listing their inconsistencies.
Someone is bound to point out that this brings us full circle, and
we're now back to Occam's razor. But I don't think so. Simplicity is
a criterion for believability, not for truth. It may well be that
today's theory of X, replete with innumerable ceteris paribus clauses,
may ultimately prove to be more believable (simple, elegant) than some
simpler challenger (even if the challenger has more explanatory power)
once we develop a more sophisticated theory of Y.
. . . to say that "every event has some cause" is incoherent is to
claim that reality is inconsistent -- that ultimately there is no
sufficient reason for it (reality) -- and to claim that there can
never be a body of law which explains reality (or any part of it)
consistently.
I still don't see why cause comes into the theory itself. Do you mean
that if the theory is viewed as a kind of 'cause' (ie a claim about a
cause) then if correct this itself is part of the picture and so should
also be described inside the theory? Well, this is not so hard. It's
simply a language with self-reference. I don't see that any infinite
regress occurs.
This is not at all what I meant. I meant that you can't demand that
scientific theories about reality be consistent when reality itself is
inconsistent.
The "language with self-reference" conjecture is interesting, but it's
still only a conjecture. How can we ever possibly know that the alpha
is the omega, or that at the base of reality, as its primary
constituent, is the entire universe. This is all very religious.
Newton's theory of gravitation (plus his dynamical theory
of force and motion) is a consistent theory. It is not a
correct theory, when interpreted as intended, in part
because of the very things you say (eg the assumption that
the concept of time has a certain fixed sense turns out to
obscure similarities-that-are-not-identities). But the
theory remains consistent.
So what good is consistency? If the subject matter of science is an
inconsistent reality, why should we desire consistent scientific
theories? You apparently mean that scientific theories should be
INTERNALLY consistent. But if reality is inherently internally
INCONSISTENT, why would we want scientific laws to have internal
consistency?
Who says reality is internally inconsistent?
Me.
I say it isn't, and not because of some fancy property of reality,
but simply by what we mean when we utter a statement: that a
certain distinction is being entertained.
Well, OK. I agree that we try to make distinctions and we try to
maintain consistency about the distinctions we make. However, it
doesn't follow from this that reality must therefore be consistent.
Consistency might well be a mold some of us try to force reality to
conform to.
It is the job of the scientist to entertain such and to try
to assess them by imaginative comparison to predictions/data. I agree
that concepts shift around, are redefined, but not at random: they must
show promise, and for this they require that the scientist can think
about them, ie recognize imaginatively a distinction between X and
not-X. If it turns out (whatever that means) that the distinction is a
poor one, that in no way bespeaks an inconsistency; it simply tells us
that we were entertaining a picture different from the one we hoped.
I don't mean to say that reality/cause/meaning is/are unproblematic.
On the contrary. I think that science is just beginning to come
to grips with itself, with what it means that such a thing as science
can flourish. I think that law itself will be a matter of deeper scientific
examination.
Be wary of the naturalistic fallacy.
Some physicists have been looking in this direction for a while.
Among philosophers, I am aware of only Justus Buchler ("Metaphysics of
Natural Complexes") in this camp. Do you know his work, and what do you
think of it?
I'm only aware of his excellent editing of Peirce's manuscripts. I
own a copy of one of his books, but the title escapes me. Many other
philosophers have attempted to come to grips with this issue. Kant
and Hegel (but see their modern critics) are two of the most
noteworthy.
∂26-Nov-83 1114 GOLUB@SU-SCORE.ARPA Faculty lunch
Received: from SU-SCORE by SU-AI with TCP/SMTP; 26 Nov 83 11:10:33 PST
Date: Sat 26 Nov 83 11:10:19-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Faculty lunch
To: faculty@SU-SCORE.ARPA
Many persons enjoyed the technical discussion last week. Does
anyone care to bring up a topic this next week?
GENE
-------
∂26-Nov-83 1311 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 26 Nov 83 13:11:40 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 26 Nov 83 16:03-EST
Date: 26 Nov 83 15:35:32 EST (Sat)
From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Re: limitations of logic
To: GAVAN%MIT-OZ@MIT-MC, Don Perlis <perlis%umcp-cs@CSNet-Relay>
Cc: phil-sci%mit-oz@MIT-MC
Via: UMCP-CS; 26 Nov 83 15:39-EST
From: GAVAN%MIT-OZ@MIT-MC
We can never get the picture right (completely and consistently)
because we ourselves are part of the picture. We share the faith (and
that's what it is) in the existence of an intelligible world, but the
world you have faith in is a consistent one. The world I have faith
in can't possibly be a consistent one. One reason is that you and I
appear to have faith in the existence of two distinct (albeit
overlapping) worlds.
Why does being part of something mean not getting its picture right?
This is not necessarily the case. It *may* be the case, but we don't
*know* that. I'm still looking. So are lots of people.
But [a] theory itself may not need to be discarded. We can simply add
a ceteris paribus condition. Marvin's example (if I recall it
approximately correctly) is that our theory of birds would hold that,
among other things, "birds are animals which can fly". But
ornithologists classify ostriches as birds and ostriches can't fly.
So X and not-X. Is something wrong with our theory of birds? Not
really. We just need a censor (ceteris paribus condition) which says
that the theory's proposition about flying doesn't apply in the case
of the ostrich. The theory (the concept of bird) still stands.
Quite so. But the theory and the concept have thereby changed. The *word*
'bird' itself is no concept. It is in relation to other things such as
its exceptional cases (ostriches) as well as its default properties (such
as flying) that it is a concept. So we don't really have X and not-X at
all.
The "language with self-reference" conjecture is interesting, but it's
still only a conjecture. How can we ever possibly know that the alpha
is the omega, or that at the base of reality, as its primary
constituent, is the entire universe. This is all very religious.
On the contrary, I don't suggest any of this as dogma. I am looking, and
I perceive science as a looking into things, not a certainty about
them, nor even about itself. I speak of consistency only in terms of the
practice of science as I am aware of it. Often the value of an idea is
not its correctness as such, but its fruitfulness in exploring new (or
old) terrain. *If* alpha=omega is true, this in itself may lead us to
awareness of it, by our consideration of this idea. Inspecting such
ideas, suggested by whatever at all, is often a way to get somewhere.
I don't intend to say it is true that the universe is self-referential,
but only that it might be, and that further investigation may show us a
lot. (If we can figure out what it *means*!)
∂26-Nov-83 1351 @MIT-MC:Batali%MIT-OZ@MIT-MC Consistency and the Real World
Received: from MIT-MC by SU-AI with TCP/SMTP; 26 Nov 83 13:51:44 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 26 Nov 83 16:47-EST
Received: from MIT-LISPM-9 by MIT-OZ via Chaosnet; 26 Nov 83 16:45-EST
Date: Saturday, 26 November 1983, 16:48-EST
From: John Batali <Batali%MIT-OZ@MIT-MC.ARPA>
Subject: Consistency and the Real World
To: perlis%umcp-cs at CSNET-CIC.ARPA, GAVAN%MIT-OZ at MIT-MC
Cc: phil-sci%mit-oz at MIT-MC
In-reply-to: The message of 26 Nov 83 15:35-EST from Don Perlis <perlis%umcp-cs at CSNet-Relay>
It seems that to suppose that the real world is consistent is to
presuppose some notion of consistency that begs the question. Even if
we say something like "consistency means that X and not-X can never be
the case simultaneously" we must assume all sorts of things about what
counts as an X, what the not- operator amounts to, what it means to "be
the case" and on and on. One valid way to state the hypothesis is that
consistency is defined as it is in logic. But then to assume that the
real world is consistent is just to assume that the real world can serve
as a model for (can be represented by) some consistent logical theory.
But that is what we started out discussing.
I don't recall anyone bringing up Godel's theorem yet. Unlike most AI
discussions, I think that the theorem has relevance here. How are we to
resurrect some model-theoretic notion of meaning when this and other
results in meta-mathematics suggest that any sufficiently powerful
formal system will be both incomplete and inconsistent? Lest anyone be
tempted to bring up stock flames on the subject of Godel's theorem, let
me restate what I think should be drawn from the inconsistency
arguments:
The model-theoretic notion of semantics that you get with logic is
inadequate for representational and programming languages. (ONE of the
reasons for believing this is the problems associated with the meanings
of terms in an inconsistent theory.) Whatever logic's worth at keeping
truth straight, a more fine-tuned notion of "meaning" is needed. I
guess that the ultimate claim is that "truth" is just not enough to do
semantics with. Even hard-core logicians agree that
"Dogs are mammals." and
"Reagan is President."
are both true, yet mean very different things. Can anyone propose a
model-theoretic account that can show how this works? There are other
accounts of semantics out there, which suggest ways to do it without
model theory. "Naturalistic" semantics suggests, for example, that
there can be some sort of (perhaps causal) connection between symbols
and what they refer to. It seems that some functionalist philosophers
are suggesting that the pattern of functional relationships among
concepts determines what they mean. Most of these approaches make some
metaphysical commitment to objects and relations and then define
meanings of symbols in terms of these. Model-theoretic semantics only
allows "the true" in its ontology. (And I suppose that this is what you
are doing when you say "the world is consistent" -- assuming the
existence of "the true.")
∂26-Nov-83 1537 @MIT-MC:DAM%MIT-OZ@MIT-MC Edited Mailing List
Received: from MIT-MC by SU-AI with TCP/SMTP; 26 Nov 83 15:36:55 PST
Date: Sat, 26 Nov 1983 18:33 EST
Message-ID: <DAM.11970800974.BABYL@MIT-OZ>
From: DAM%MIT-OZ@MIT-MC.ARPA
To: JMC@SU-AI.ARPA
cc: phil-sci%MIT-OZ@MIT-MC.ARPA
Subject: Edited Mailing List
Date: Saturday, 26 November 1983 14:09-EST
From: John McCarthy <JMC at SU-AI>
It even might be worthwhile to have an edited discussion.
It would be much more tolerant than a journal, but not
every contribution would be accepted by the editor who
might use referees if he found it necessary.
I think an edited version of phil-sci is a grand idea. I
propose that there be three editors, one for logic, one for the
philosophy of science, and one for epistemology.
Rather than edit message content the primary function of the editing
should be to ensure that the messages are carefully written, concise,
relevant to the discussion, and non-redundant with other messages.
Perhaps redundant messages could be merged into co-authored messages
after a cycle of refereeing.
A potential author would send a message to one of the editors
(and thus place the message in one of the categories). The editor
would then forward the message to a referee. The referees would
then mail the messages back to the editor (so as to remain anonymous)
who would then either send them to the mailing list or return them to
the author with comments from the referee.
Messages should be short (a few pages at most) and turnaround
time on refereeing should be about a day.
This would be much less formal than a journal. However, it
would be much more useful than a large mailing list because both the
amount and quality of the mail could be somewhat controlled. One
problem might be that the one-day delay caused by refereeing would keep
people from sending messages. However, if a large enough readership
could be established, this might not be a problem.
Would McCarthy volunteer to be the logic editor?
David Mc
∂26-Nov-83 1820 @MIT-MC:JMC@SU-AI
Received: from MIT-MC by SU-AI with TCP/SMTP; 26 Nov 83 18:20:38 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 26 Nov 83 21:17-EST
Date: 26 Nov 83 1818 PST
From: John McCarthy <JMC@SU-AI>
To: phil-sci%oz@MIT-MC
logic.pro[f83,jmc] Proposal for logic in AI mailing list
Here is the message to which DAM has already referred.
I find the phil-sci mailing list somewhat frustrating
because the discussants have too little in common and therefore
spend too much energy arguing. Therefore, I think it might be
worthwhile to have a much narrower list. It even might be
worthwhile to have an edited discussion. It would be much
more tolerant than a journal, but not every contribution would
be accepted by the editor who might use referees if he found
it necessary.
The subject matter would be logic in AI but would not
include Prolog programming, because there is already a discussion
list for that. Its center would be formalization of common
sense facts including naive physics and actions to achieve
goals. It would also include problem solving programs using
logic or logicoid (e.g. STRIPS-like) formalisms. Reason
maintenance would be included also. General considerations,
such as what kinds of formalization of reality are appropriate,
would be included, but the debate about whether reality is consistent
would be left for
phil-sci. When technical terms from logic, e.g. structure,
interpretation and model, are used, participants would be
expected to adhere to the usage standard in logic. For example,
a model of a collection of sentences is an interpretation in
which the sentences are true.
Do you have an interest in such a discussion? What topics
would you like to see included and excluded?
Should there be editing, and, if so, how should it be done?
∂27-Nov-83 0427 @MIT-MC:GAVAN%MIT-OZ@MIT-MC limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 27 Nov 83 04:27:04 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 27 Nov 83 07:24-EST
Date: Sun, 27 Nov 1983 07:23 EST
Message-ID: <GAVAN.11970941110.BABYL@MIT-OZ>
From: GAVAN%MIT-OZ@MIT-MC.ARPA
To: Don Perlis <perlis%umcp-cs@CSNET-CIC.ARPA>
Cc: phil-sci%mit-oz@MIT-MC
Subject: limitations of logic
From: Don Perlis <perlis%umcp-cs at CSNet-Relay>
From: GAVAN%MIT-OZ@MIT-MC
We can never get the picture right (completely and consistently)
because we ourselves are part of the picture. We share the faith (and
that's what it is) in the existence of an intelligible world, but the
world you have faith in is a consistent one. The world I have faith
in can't possibly be a consistent one. One reason is that you and I
appear to have faith in the existence of two distinct (albeit
overlapping) worlds.
Why does being part of something mean not getting its picture right?
This is not necessarily the case. It *may* be the case, but we don't
*know* that. I'm still looking. So are lots of people.
See BATALI's recent contribution.
The "language with self-reference" conjecture is interesting, but it's
still only a conjecture. How can we ever possibly know that the alpha
is the omega, or that at the base of reality, as its primary
constituent, is the entire universe. This is all very religious.
On the contrary, I don't suggest any of this as dogma.
I don't suggest that it is either, only that it's unknowable.
Anyway, the point has been made. Consistency is a mold WE try to force
reality to fit into.
∂27-Nov-83 1032 KJB@SRI-AI.ARPA [Y. Moschovakis <oac5!ynm@UCLA-CS>: Abstract of talk]
Received: from SRI-AI by SU-AI with TCP/SMTP; 27 Nov 83 10:32:30 PST
Date: Sun 27 Nov 83 10:27:52-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: [Y. Moschovakis <oac5!ynm@UCLA-CS>: Abstract of talk]
To: csli-friends@SRI-AI.ARPA, csli-c1@SRI-AI.ARPA
On November 29 and Dec 6 Yiannis Moschovakis will speak to the CSLI
C1-D1 working group, held each Tuesday at 9:30 at PARC.
"On the foundations of the theory of algorithms"
ABSTRACT
These talks will present in outline an abstract (axiomatic)
theory of recursion, which aims to capture the basic properties of
recursion and recursive functions on the integers, much like the
theory of metric spaces captures the basic properties of limits and
continuous functions on the reals. The basic notion of the theory is a
(suitable mathematical representation of an) algorithm. In addition to
classical recursion, the models of the theory include recursion in
higher types, positive elementary induction and similar theories
constructed by logicians, but they also include pure Lisp, recursion
schemes and the familiar programming languages (as algorithm
describers). From the technical point of view, one can view this work
as the theory of many-sorted, concurrent and (more significantly)
second-order recursion schemes.
The first lecture will concentrate on the pure theory of
recursion and describe some of the basic results and directions of
this theory. In the second lecture we will attempt to look at some of
the less developed connections of this theory with the foundations of
computer science, particularly the relation between an algorithm and
its implementations.
-------
∂27-Nov-83 2131 LAWS@SRI-AI.ARPA AIList Digest V1 #103
Received: from SRI-AI by SU-AI with TCP/SMTP; 27 Nov 83 21:30:05 PST
Date: Fri Nov 25, 1983 09:29-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #103
To: AIList@SRI-AI
AIList Digest Friday, 25 Nov 1983 Volume 1 : Issue 103
Today's Topics:
Alert - Neural Network Simulations & Weizenbaum on The Fifth Generation,
AI Jargon - Why AI is Hard to Read,
AI and Automation - Economic Effects & Reliability,
Conference - Logic Programming Symposium
----------------------------------------------------------------------
Date: Sun, 20 Nov 83 18:05 PST
From: Allen VanGelder <avg@diablo>
Subject: Those interested in AI might want to read ...
[Reprinted from the SU-SCORE bboard.]
[Those interested in AI might want to read ...]
the article in November *Psychology Today* about Francis Crick and Graeme
Mitchison's neural network simulations. The title is "The Dream Machine," p. 22.
------------------------------
Date: Sun 20 Nov 83 18:50:27-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Re: Those interested in AI might want to read...
[Reprinted from the SU-SCORE bboard.]
I would guess that the "Psychology Today" article is a simplified form of the
Crick & Mitchison paper which came out in "Nature" about 2 months ago. Can't
comment on the Psychology Today article, but the Nature article was
stimulating and provocative. The same issue of Nature has a paper (referred to
by Crick) of a simulation which was even better than the Crick paper
(sorry, Francis!).
------------------------------
Date: Mon 21 Nov 83 09:58:04-PST
From: Benjamin Grosof <GROSOF@SUMEX-AIM.ARPA>
Subject: Weizenbaum review of "The Fifth Generation": hot stuff!
[Reprinted from the SU-SCORE bboard.]
The current issue of the NY Review of Books contains a review by Joseph
Weizenbaum of MIT (Author of "Computer Power and Human Reason", I think)
of Feigenbaum and McCorduck's "The Fifth Generation". Warning: it is
scathing and controversial, hence great reading. --Benjamin
------------------------------
Date: Wed 23 Nov 83 14:38:38-PST
From: Wilkins <WILKINS@SRI-AI.ARPA>
Subject: why AI is hard to read
There is one reason much AI literature is hard to read. It is common for
authors to invent a whole new set of jargon to describe their system, instead
of describing it in some common language (e.g., first-order logic) or relating
it to previous well-understood systems or principles. In recent years
there has been an increased awareness of this problem, and hopefully things
are improving and will continue to do so. There are also a lot more
submissions now to IJCAI, etc, so higher standards end up being applied.
Keep truckin'
David Wilkins
------------------------------
Date: 21 Nov 1983 10:54-PST
From: dietz%usc-cse%USC-ECL@SRI-NIC
Reply-to: dietz%USC-ECL@SRI-NIC
Subject: Economic effects of automation
Reply to Marcel Schoppers (AIList 1:101):
I agree that "computers will eliminate some jobs but create others" is
a feeble excuse. There's not much evidence for it. Even if it's true,
those whose job skills are devalued will be losers.
But why should this bother me? I don't buy manufactured goods to
employ factory workers, I buy them to gratify my own desires. As a
computer scientist I will not be laid off; indeed, automation will
increase the demand for computer professionals. I will benefit from
the higher quality and lower prices of manufactured goods. Automation
is entirely in my interest. I need no excuse to support it.
... I very much appreciated the idea ... that we should be building
expert systems in economics to help us plan and control the effects of
our research.
This sounds like an awful waste of time to me. We have no idea how to
predict the economic effects of much of anything except at the most
rudimentary levels, and there is no evidence that we will anytime soon
(witness the failure of econometrics). There would be no way to test
the systems. Building expert systems is not a substitute for
understanding.
Automating medicine and law: a much better idea is to eliminate or
scale back the licensing requirements that allow doctors and lawyers to
restrict entry into their fields. This would probably be necessary to
get much benefit from expert systems anyway.
------------------------------
Date: 22 Nov 83 11:27:05-PST (Tue)
From: decvax!genrad!security!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: dciem.501
It seems a little dangerous "to send machines where doctors won't go" -
you'll get the machines treating the poor, and human experts for the
privileged few.
If the machines were good enough, I wouldn't mind being underprivileged.
I'd rather be flown into a foggy airport by an autopilot than by a human pilot.
Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utcsrgv!dciem!mmt
------------------------------
Date: 22 Nov 1983 13:06:13-EST (Tuesday)
From: Doug DeGroot <Degroot.YKTVMV.IBM@Rand-Relay>
Subject: Logic Programming Symposium (long message)
[Excerpt from a notice in the Prolog Digest.]
1984 International Symposium on Logic Programming
February 6-9, 1984
Atlantic City, New Jersey
BALLY'S PARK PLACE CASINO
Sponsored by the IEEE Computer Society
For more information contact PEREIRA@SRI-AI or:
Registration - 1984 ISLP
Doug DeGroot, Program Chairman
IBM Thomas J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY 10598
STATUS           Conference   Tutorial
Member, IEEE       $155         $110
Non-member         $180         $125
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Conference Overview
Opening Address:
Prof. J.A. (Alan) Robinson
Syracuse University
Guest Speaker:
Prof. Alain Colmerauer
University of Aix-Marseille II
Marseille, France
Keynote Speaker:
Dr. Ralph E. Gomory,
IBM Vice President & Director of Research,
IBM Thomas J. Watson Research Center
Tutorial: An Introduction to Prolog
Ken Bowen, Syracuse University
35 Papers, 11 Sessions (11 Countries, 4 Continents)
Preliminary Conference Program
Session 1: Architectures I
←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Parallel Prolog Using Stack Segments on Shared-memory
Multiprocessors
Peter Borgwardt (Univ. Minn)
2. Executing Distributed Prolog Programs on a Broadcast Network
David Scott Warren (SUNY Stony Brook, NY)
3. AND Parallel Prolog in Divided Assertion Set
Hiroshi Nakagawa (Yokohama Nat'l Univ, Japan)
4. Towards a Pipelined Prolog Processor
Evan Tick (Stanford Univ,CA) and David Warren
Session 2: Architectures II
←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Implementing Parallel Prolog on a Multiprocessor Machine
Naoyuki Tamura and Yukio Kaneda (Kobe Univ, Japan)
2. Control of Activities in the OR-Parallel Token Machine
Andrzej Ciepielewski and Seif Haridi (Royal Inst. of
Tech, Sweden)
3. Logic Programming Using Parallel Associative Operations
Steve Taylor, Andy Lowry, Gerald Maguire, Jr., and Sal
Stolfo (Columbia Univ,NY)
Session 3: Parallel Language Issues
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Negation as Failure and Parallelism
Tom Khabaza (Univ. of Sussex, England)
2. A Note on Systems Programming in Concurrent Prolog
David Gelernter (Yale Univ, CT)
3. Fair, Biased, and Self-Balancing Merge Operators in
Concurrent Prolog
Ehud Shapiro (Weizmann Inst. of Tech, Israel)
Session 4: Applications in Prolog
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Editing First-Order Proofs: Programmed Rules vs. Derived Rules
Maria Aponte, Jose Fernandez, and Philippe Roussel (Simon
Bolivar Univ, Venezuela)
2. Implementing Parallel Algorithms in Concurrent Prolog:
The MAXFLOW Experience
Lisa Hellerstein (MIT,MA) and Ehud Shapiro (Weizmann
Inst. of Tech, Israel)
Session 5: Knowledge Representation and Data Bases
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. A Knowledge Assimilation Method for Logic Databases
T. Miyachi, S. Kunifuji, H. Kitakami, K. Furukawa, A.
Takeuchi, and H. Yokota (ICOT, Japan)
2. Knowledge Representation in Prolog/KR
Hideyuki Nakashima (Electrotechnical Laboratory, Japan)
3. A Methodology for Implementation of a Knowledge
Acquisition System
H. Kitakami, S. Kunifuji, T. Miyachi, and K. Furukawa
(ICOT, Japan)
Session 6: Logic Programming plus Functional Programming - I
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. FUNLOG = Functions + Logic: A Computational Model
Integrating Functional and Logical Programming
P.A. Subrahmanyam and J.-H. You (Univ of Utah)
2. On Implementing Prolog in Functional Programming
Mats Carlsson (Uppsala Univ, Sweden)
3. On the Integration of Logic Programming and Functional Programming
R. Barbuti, M. Bellia, G. Levi, and M. Martelli (Univ. of
Pisa and CNUCE-CNR, Italy)
Session 7: Logic Programming plus Functional Programming- II
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Stream-Based Execution of Logic Programs
Gary Lindstrom and Prakash Panangaden (Univ of Utah)
2. Logic Programming on an FFP Machine
Bruce Smith (Univ. of North Carolina at Chapel Hill)
3. Transformation of Logic Programs into Functional Programs
Uday S. Reddy (Univ of Utah)
Session 8: Logic Programming Implementation Issues
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. Efficient Prolog Memory Management for Flexible Control Strategies
David Scott Warren (SUNY at Stony Brook, NY)
2. Indexing Prolog Clauses via Superimposed Code Words and
Field Encoded Words
Michael J. Wise and David M.W. Powers, (Univ of New South
Wales, Australia)
3. A Prolog Technology Theorem Prover
Mark E. Stickel, (SRI, CA)
Session 9: Grammars and Parsing
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. A Bottom-up Parser Based on Predicate Logic: A Survey of
the Formalism and Its Implementation Technique
K. Uehara, R. Ochitani, O. Kakusho, and J. Toyoda (Osaka
Univ, Japan)
2. Natural Language Semantics: A Logic Programming Approach
Antonio Porto and Miguel Filgueiras (Univ Nova de Lisboa,
Portugal)
3. Definite Clause Translation Grammars
Harvey Abramson, (Univ. of British Columbia, Canada)
Session 10: Aspects of Logic Programming Languages
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. A Primitive for the Control of Logic Programs
Kenneth M. Kahn (Uppsala Univ, Sweden)
2. LUCID-style Programming in Logic
Derek Brough (Imperial College, England) and Maarten H.
van Emden (Univ. of Waterloo, Canada)
3. Semantics of a Logic Programming Language with a
Reducibility Predicate
Hisao Tamaki (Ibaraki Univ, Japan)
4. Object-Oriented Programming in Prolog
Carlo Zaniolo (Bell Labs, New Jersey)
Session 11: Theory of Logic Programming
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
1. The Occur-check Problem in Prolog
David Plaisted (Univ of Illinois)
2. Stepwise Development of Operational and Denotational
Semantics for Prolog
Neil D. Jones (Datalogisk Inst, Denmark) and Alan Mycroft
(Edinburgh Univ, Scotland)
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
An Introduction to Prolog
A Tutorial by Dr. Ken Bowen
Outline of the Tutorial
- AN OVERVIEW OF PROLOG
- Facts, Databases, Queries, and Rules in Prolog
- Variables, Matching, and Unification
- Search Spaces and Program Execution
- Non-determinism and Control of Program Execution
- Natural Language Processing with Prolog
- Compiler Writing with Prolog
- An Overview of Available Prologs
Who Should Take the Tutorial
The tutorial is intended for both managers and programmers
interested in understanding the basics of logic programming
and especially the language Prolog. The course will focus on
direct applications of Prolog, such as natural language
processing and compiler writing, in order to show the power
of logic programming. Several different commercially
available Prologs will be discussed and compared.
About the Instructor
Dr. Ken Bowen is a member of the Logic Programming Research
Group at Syracuse University in New York, where he is also a
Professor in the School of Computer and Information
Sciences. He has authored many papers in the field of logic
and logic programming. He is considered to be an expert on
the Prolog programming language.
------------------------------
End of AIList Digest
********************
∂28-Nov-83 0709 @MIT-MC:mclean@NRL-CSS tarski and meaning, again
Received: from MIT-MC by SU-AI with TCP/SMTP; 28 Nov 83 07:09:41 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 28 Nov 83 10:07-EST
From: John McLean <mclean@NRL-CSS>
Date: Mon, 28 Nov 83 10:02:54 EST
To: phil-sci at mit-mc
Subject: tarski and meaning, again
From JCMA%MIT-OZ@MIT-MC.ARPA:
Note that the capacity to produce a program requires an intensional theory
of the behavior to be evinced. Thus, while unimplemented theories can be
examined according to behavioral criteria, implemented theories can be
compared both behaviorally and intensionally. Where does this leave
behavioral cognitive scientists and meaning theorists?
This leaves cognitive scientists and meaning theorists exactly as they have
been with respect to humans all this time. My "internal theory of reference"
may be different from yours, but this makes no difference with respect to
communication as long as they are behavioristically indistinguishable.
From Don Perlis:
To define truth of a formula S in a model (context)
M, Tarski provided a translation of S into plain informal language, ie
into a relation on M that then either holds or doesn't. This relation
means something, in the ordinary (ie, naive) sense (and easily reduced
to set membership for the fussy). Thus in the usual model N of
arithmetic, the formula (x)(y)(x+y=y+x) *means* that for all natural
numbers x and y, their sum in one order is the same as that in the
other order. Trivial, obvious, appropriate, and meaningful. The
formula in question then is seen to be 'true' in N, because its
'meaning' (translation) is so.
In stating that one can accept Tarski's definition of truth while rejecting
meanings completely I was thinking primarily of Quine. For Quine translation
does not preserve meaning, but only behavioristically discernable behavior.
As a consequence, translation is indeterminate since any translation manual
that preserves behavior is equally correct. This constitutes a rejection of
the traditional concept of "meaning" since meanings were regarded as what
could separate a correct behavioristically adequate translation manual from
an incorrect one. With respect to your example, why doesn't '(x)(y)(x+y=y+x)'
mean that for all ordinals x and y, their ordinal sum is invariant with respect
to order?
John McLean
∂28-Nov-83 0741 @MIT-MC:DAM%MIT-OZ@MIT-MC Model Theoretic Ontologies
Received: from MIT-MC by SU-AI with TCP/SMTP; 28 Nov 83 07:41:18 PST
Date: Mon, 28 Nov 1983 10:33 EST
Message-ID: <DAM.11971237931.BABYL@MIT-OZ>
From: DAM%MIT-OZ@MIT-MC.ARPA
To: BATALI%MIT-OZ@MIT-MC.ARPA
cc: phil-sci%MIT-OZ@MIT-MC.ARPA
Subject: Model Theoretic Ontologies
Date: Saturday, 26 November 1983, 16:48-EST
From: John Batali <Batali%MIT-OZ at MIT-MC.ARPA>
The model-theoretic notion of semantics that you get with logic is
inadequate for representational and programming languages. ... I
guess that the ultimate claim is that "truth" is just not enough to do
semantics with. Even hard-core logicians agree that:
"Dogs are mammals." and;
"Reagan is President."
Are both true, yet mean very different things. Can anyone propose a
model-theoretic account that can show how this works?
I'm not sure that all logicians would agree on the meaning
of "meaning", but I do not interpret "meaning" to be "truth value".
In Tarskian semantics each sentence is either true or false IN A
MODEL. Modern logicians never talk about the "real world", though
presumably the real world can be approximated by a Tarskian model. I
take the extensional "meaning" of a sentence to be its TRUTH
CONDITIONS, i.e. the truth function on models associated with the
sentence.
Model-theoretic semantics only allows "the true" in its ontology.
NO! The ontology of model-theoretic semantics is given by the
models. First order logic has one particular kind of model but richer
logics could have richer kinds of models and yet still be based on
Tarskian semantics. Thus Tarskian semantics allows for arbitrarily
rich ontologies!!
David Mc
∂28-Nov-83 0919 KJB@SRI-AI.ARPA Press Release
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Nov 83 09:19:12 PST
Date: Mon 28 Nov 83 09:08:36-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Press Release
To: csli-folks@SRI-AI.ARPA
I hear that a press release has appeared in the Palo Alto Weekly and the
Times-Tribune. I have not seen it, but from what I hear, it bears
little resemblance to the one I passed on to Charlie over a month ago.
He hired someone to do another version, I am told, but I have not
spoken to her.
-------
∂28-Nov-83 0930 @MIT-MC:Batali%MIT-OZ@MIT-MC Model Theoretic Ontologies
Received: from MIT-MC by SU-AI with TCP/SMTP; 28 Nov 83 09:30:28 PST
Received: from MIT-LISPM-9 by MIT-OZ via Chaosnet; 28 Nov 83 11:57-EST
Date: Monday, 28 November 1983, 12:00-EST
From: John Batali <Batali%MIT-OZ@MIT-MC.ARPA>
Subject: Model Theoretic Ontologies
To: DAM%MIT-OZ@MIT-MC.ARPA, BATALI%MIT-OZ@MIT-MC.ARPA
Cc: phil-sci%MIT-OZ@MIT-MC.ARPA
In-reply-to: <DAM.11971237931.BABYL@MIT-OZ>
From: DAM@MIT-OZ
I
take the extensional "meaning" of a sentence to be its TRUTH
CONDITIONS, i.e. the truth function on models associated with the
sentence.
I assume by this that the difference in meaning between these two sentences:
"Dogs are mammals." and;
"Reagan is President."
Is that the truth function of one is different from the truth function
of the other, even though the value of the function is the same (namely
true). I guess that this is a start, because we can now start
individuating meanings at least as much as we can individuate functions.
Of course just being able to individuate meanings is not to be able to
understand them.
The ontology of model-theoretic semantics is given by the
models. First order logic has one particular kind of model but richer
logics could have richer kinds of models and yet still be based on
Tarskian semantics. Thus Tarskian semantics allows for arbitrarily
rich ontologies!!
Okay fine. It sounds like the claim is that Tarskian semantics ALLOWS
for arbitrarily rich ontologies. But to really get representation
right, we have to HAVE an adequately rich ontology. I was arguing, and
I think that you accept, that a model in which statements are either
true or not is just insufficient. And I think that the point of Carl's
polemic against "logic programming" is that such is all the model you
seem to get in what logic programmers currently call logic.
But to do programming, there must be some notion of "process" in the
ontology, some idea of things happening in some temporal relation.
There must be some notion of the kinds of objects that there can be, and
what sorts of relations can hold among them. The point is that a good
representation language has to be more than just "logic." It must be,
say, logic with some ontological commitments as to what sorts of things
are out there to be described. In such a case we would be USING logic
to do representation, but we would not be using JUST logic. Logic alone
is inadequate, the argument goes, because it, by itself, presupposes only
the assumption that "the true" exists. Presuppositions of, say,
processes, and time and so on can be represented in logic, but in this
case we are using logic to represent our theory of the world and
"meanings" are defined in terms of that theory, not logic.
∂28-Nov-83 1001 @SRI-AI.ARPA:donahue.pa@PARC-MAXC.ARPA 1:30 Tues. Nov. 29: Computing Seminar: Luca Cardelli (Bell
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Nov 83 10:01:46 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Mon 28 Nov 83 09:53:02-PST
Date: 28 Nov 83 9:37:19 PST
From: donahue.pa@PARC-MAXC.ARPA
Subject: 1:30 Tues. Nov. 29: Computing Seminar: Luca Cardelli (Bell
Labs) "ML: A Language and its Types" - CSL Commons
To: ComputingSeminar↑.pa@PARC-MAXC.ARPA,
ComputingSeminarRemote↑.pa@PARC-MAXC.ARPA
Cc: csli-friends@sri-ai.ARPA, CSLI-C1@sri-ai.ARPA,
Anderson.pa@PARC-MAXC.ARPA, Lee.pa@PARC-MAXC.ARPA,
Methodology↑.pa@PARC-MAXC.ARPA, Lynn.ES@PARC-MAXC.ARPA,
Marshall.WBST@PARC-MAXC.ARPA
Reply-To: Donahue.pa@PARC-MAXC.ARPA
Speaker: Luca Cardelli (Bell Labs)
Title: ML: A Language and its Types
Abstract: ML is an interactive, statically-scoped functional programming
language. Functions are first class objects which can be passed as
parameters, returned as values and embedded in data structures.
Higher-order functions (i.e. functions receiving or producing other
functions) are used extensively.
ML is a strongly typed language. Every ML expression has a type, which
is determined statically. The type of an expression is usually
automatically inferred by the system, without need of type definitions.
The type system is polymorphic, conferring on the language much of the
flexibility of type-free languages, without paying the conceptual cost
of run-time type errors or the computational cost of run-time
typechecking.
Other features include parametric and abstract types, pattern matching,
exceptions, modules and streams. An ML compiler is available under Unix.
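To give a concrete feel for the kind of inference the abstract describes, here is a
small sketch in OCaml, a later dialect of ML (the syntax of the 1983 language
differs, so this is illustrative only); the compiler infers a polymorphic type for
each definition with no type annotations:

  (* 'map' is inferred to have type ('a -> 'b) -> 'a list -> 'b list *)
  let rec map f = function
    | [] -> []
    | x :: rest -> f x :: map f rest

  (* 'compose' is a higher-order function: ('b -> 'c) -> ('a -> 'b) -> 'a -> 'c *)
  let compose f g x = f (g x)

  let () =
    (* the one polymorphic 'map' is used at two different types, with no run-time checks *)
    let squares = map (fun n -> n * n) [ 1; 2; 3 ] in
    let lengths = map String.length [ "ML"; "types" ] in
    List.iter (compose print_endline string_of_int) (squares @ lengths)

An ill-typed use such as map String.length [ 1; 2 ] is rejected at compile time,
which is the sense in which there are no run-time type errors.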
>> If you'd like to hear an audio tape of this talk, send a message to
Don <Lynn.ES> if you're a Southlander,
Sidney <Marshall.WBST> if you're a Webster,
Kathi <Anderson.PA>, otherwise
∂28-Nov-83 1051 @SRI-AI.ARPA:GOGUEN@SRI-CSL this week's rewrite seminar
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Nov 83 10:51:37 PST
Received: from SRI-CSL by SRI-AI.ARPA with TCP; Mon 28 Nov 83 10:49:30-PST
Date: 28 Nov 1983 1040-PST
From: GOGUEN at SRI-CSL
Subject: this week's rewrite seminar
To: Elspas at SRI-CSL, JGoldberg at SRI-CSL, Goguen at SRI-CSL,
Green at SRI-CSL, DHare at SRI-CSL, Kautz at SRI-CSL, Lamport at SRI-CSL,
Levitt at SRI-CSL, Melliar-Smith at SRI-CSL, Meseguer at SRI-CSL,
Moriconi at SRI-CSL, Neumann at SRI-CSL, Pease at SRI-CSL,
Schwartz at SRI-CSL, Shostak at SRI-CSL, Oakley at SRI-CSL, Crow at SRI-CSL,
Ashcroft at SRI-CSL, Denning at SRI-CSL, Geoff at SRI-CSL,
Rushby at SRI-CSL, Jagan at SRI-CSL, Jouannaud at SRI-CSL,
Nelson at SRI-CSL, Hazlett at SRI-CSL, Lansky at SRI-CSL, Billoir at SRI-CSL
cc: jk at SU-AI, waldinger at SRI-AI, stickel at SRI-AI, pereira at SRI-AI,
clt at SU-AI, kbj at SRI-AI, csli-friends at SRI-AI, dkanerva at SRI-AI,
briansmith.pa at PARC-MAXC
On Friday, 2 December 1983, at 3 pm, Jean-Pierre Jouannaud will try to cover
the following topics for us:
1. Termination: Kruskal's theorem (without proof), simplification orderings,
Dershowitz's theorem (with proof using Kruskal because simple).
2. Recursive Path Ordering with Status: examples.
3. Equivalence of Church-Rosser and Confluence: proof is an exercise.
4. Noetherian Induction: application to Newman's theorem.
5. Huet's theorem: Local Confluence can be checked on critical pairs
(with proof).
-------
∂28-Nov-83 1145 @MIT-MC:crummer@AEROSPACE Autopoiesis and Self-Referential Systems
Received: from MIT-MC by SU-AI with TCP/SMTP; 28 Nov 83 11:45:06 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 28 Nov 83 14:41-EST
Date: 28 November 1983 1137-PST (Monday)
From: crummer at AEROSPACE (Charlie Crummer)
Subject: Autopoiesis and Self-Referential Systems
To: PHIL-SCI at MIT-MC
I am new to the discussion group so forgive me if this is redundant.
I have been reading lately about so-called "autopoietic" systems, i.e.
systems which produce themselves (they may also reproduce themselves, but
that is something else). The concept comes from the biologists Humberto
Maturana, Francisco Varela, and others. An example of an autopoietic system
is a living cell. It establishes and maintains its own integrity from within.
This is an interesting concept and may have use in describing political and
other organizational systems.
Another interesting example may be the class of non-abelian gauge fields in
elementary particle theory. Non-abelian fields are self-interacting and carry
their own charge, i.e., they may be their own source. Questions like "Where does
a gauge particle come from?" may be meaningless.
--Charlie
∂28-Nov-83 1307 ALMOG@SRI-AI.ARPA Reminder on why context wont go away
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Nov 83 13:07:26 PST
Date: 28 Nov 1983 1301-PST
From: Almog at SRI-AI
Subject: Reminder on why context wont go away
To: csli-friends at SRI-AI
Tomorrow, Tuesday 11.29.83, we meet at 2.30(!) at Ventura. (please note:
2.30, NOT 3.15). The speaker will be Peter Gardenfors from Lund University,
Sweden. Prof. Gardenfors is visiting CSLI this year. His talk will be on:
"An Epistemic Semantics for Conditionals".
Next week's speaker: Ivan Sag.
I attach an abstract of Gardenfors' talk. Peter Gardenfors
Lund University, Sweden
2.30, Ventura Hall, 11.29.83
Talk: An Epistemic Semantics for Conditionals
A semantics for different kinds of conditional sentences will
be outlined. The ontological basis is states of belief and changes of belief
rather than possible worlds and similarities between worlds. It will be
shown how the semantic analysis can account for some of the context
dependence of the interpretation of conditionals.
-------
∂28-Nov-83 1322 @SRI-AI.ARPA:TW@SU-AI Abstract for Talkware seminar Wed - Amy Lansky
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Nov 83 13:21:52 PST
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Mon 28 Nov 83 13:21:44-PST
Date: 28 Nov 83 1318 PST
From: Terry Winograd <TW@SU-AI>
Subject: Abstract for Talkware seminar Wed - Amy Lansky
To: "@377.DIS[1,TW]"@SU-AI
Date: November 30
Speaker: Amy Lansky (Stanford / SRI)
Topic: Specification of Concurrent Systems
Time: 2:15 - 4
Place: 380Y (Math corner)
This talk will describe the use of GEM: an event-oriented model for
specifying and verifying properties of concurrent systems.
The GEM model may be broken up into two components: computations and
specifications. A GEM computation is a formal representation of concurrent
execution. Program executions, as well as activity in other domains, may
be modeled. A GEM specification is a set of logic formulae which may be
applied to GEM computations. These formulae are used to restrict
computations in such a way that they form characterizations of
specific problems, or represent executions of specific languages.
A primary result of my research with GEM has been a demonstration of the power
and breadth of an event-oriented approach to concurrent activity.
The model has been used successfully to describe various
language primitives (the Monitor, CSP, ADA tasks), several problems,
including two distributed algorithms, and to verify concurrent
programs.
In this seminar I will introduce some of the important features of GEM
and demonstrate their use in modeling many familiar computational
behavior patterns including: sequentiality, nondeterminism, priority,
liveness, fairness, and scope. Specification of language concepts
such as data abstraction, primitives such as CSP's synchronous I/O,
as well as familiar problems (Readers/Writers) will be included.
This talk will also discuss directions for further research based on GEM.
One possibility is the use of graphics for the construction and simulation of
GEM specifications.
∂28-Nov-83 1351 @MIT-MC:Tong.PA@PARC-MAXC Re: Autopoiesis and Self-Referential Systems
Received: from MIT-MC by SU-AI with TCP/SMTP; 28 Nov 83 13:49:43 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 28 Nov 83 16:40-EST
Date: Mon, 28 Nov 83 13:35 PST
From: Tong.PA@PARC-MAXC.ARPA
Subject: Re: Autopoiesis and Self-Referential Systems
In-reply-to: "crummer@AEROSPACE.ARPA's message of 28 Nov 83 11:37 PST
(Monday)"
To: crummer@AEROSPACE.ARPA (Charlie Crummer)
cc: PHIL-SCI@MIT-MC.ARPA
Charlie,
What are you after? I assume you've tossed out the concept of
autopoiesis to solicit reactions, but of what nature? Any or all of the
following might be what you want:
-------------------------------
What is an autopoietic system?
Surely "self-producing system" is inadequate, if only because that is
just as unclear as "autopoietic system". "[A cell] establishes and
maintains its own integrity from within." What does "integrity" mean?
The cell doesn't physically collapse? You surely don't mean the cell
does not exist in or depend upon an environment. I understand an
autopoietic system to be one that is *structure-coupled* to its
environment. You perhaps want a discussion of this term.
Are autopoietic systems self-referential systems?
You mention self-reference in your msg header, but make no further
reference to it.
Are human beings examples of autopoietic systems?
You mention cells, and speculate on organizations, but you left out an
extremely important intermediary example.
What can we gain by using the notion of autopoiesis?
There would be no point in pursuing a discussion on autopoiesis if the
result would be like trying to define "intelligence" or "life".
-------------------------------
Why don't you give preliminary answers to these questions, so we can
understand what manner of beast it is you wish us to study.
Chris
∂28-Nov-83 1357 LAWS@SRI-AI.ARPA AIList Digest V1 #104
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Nov 83 13:56:35 PST
Date: Mon 28 Nov 1983 09:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #104
To: AIList@SRI-AI
AIList Digest Monday, 28 Nov 1983 Volume 1 : Issue 104
Today's Topics:
Information Retrieval - Request,
Programming Languages - Lisp Productivity,
AI and Society - Expert Systems,
AI Funding - Capitalistic AI,
Humor - Problem with Horn Clauses,
Seminar - Introspective Problem Solver,
Graduate Program - Social Impacts at UC-Irvine
----------------------------------------------------------------------
Date: Sun, 27 Nov 83 11:41 EST
From: Ed Fox <fox.vpi@Rand-Relay>
Subject: Request for machine readable volumes, info. on retrieval
projects
Please send details of how to obtain any machine readable documents such
as books, reference volumes, encyclopedias, dictionaries, journals, etc.
These would be utilized for experiments in information retrieval. This
is not aimed at large bibliographic databases but rather at finding
a few medium to long items that exist both in book form and full text
computer tape versions (readable under UNIX or VMS).
Information on existing or planned projects for retrieval of passages
(e.g., paragraphs or pages) from books, encyclopedias, electronic mail
digests, etc. would also be helpful.
I look forward to your reply. Thanks in advance, Ed Fox.
Dr. Edward A. Fox, Dept. of Computer Science, 562 McBryde Hall,
Virginia Polytechnic Institute and State University (VPI&SU or Virginia Tech),
Blacksburg, VA 24061; (703)961-5113 or 6931; fox%vpi@csnet-relay via csnet,
foxea%vpivm1.bitnet@berkeley via bitnet
------------------------------
Date: 25 Nov 83 22:47:27-PST (Fri)
From: pur-ee!uiucdcs!smu!leff @ Ucb-Vax
Subject: lisp productivity question - (nf)
Article-I.D.: uiucdcs.4149
Is anybody aware of any studies on programmer productivity in Lisp?
1. Can Lisp programmers produce the same number of lines per day, week, or
month as in 'regular' languages like Pascal, PL/1, etc.?
2. Has anybody tried writing a fairly large program, of the kind that would
normally be done in Lisp, in a regular language instead and compared the
ratio of the number of lines?
For APL, a letter to Comm. ACM reported that APL programs took one fifth
the number of lines of equivalent programs in regular languages and took
about twice as long per line to debug; one fifth the lines at twice the time
per line comes to roughly two fifths of the total effort, so APL improved the
productivity of getting a function done by about a factor of two. I am curious
if anything similar has been done for lisp.
[One can, of course, write any APL program body as a single line.
I suspect it would not take much longer to write that way, but it
would be impossible to modify a week later. Much the same could be
said for undocumented and poorly structured Lisp code. -- KIL]
------------------------------
Date: 22 Nov 83 21:01:33-PST (Tue)
From: decvax!genrad!grkermit!masscomp!clyde!akgua!psuvax!lewis @ Ucb-Vax
Subject: Re:Re: just a reminder... - (nf)
Article-I.D.: psuvax.359
Why should it be dangerous to have machines treating the poor? There
is no reason to believe that human experts will always be superior to
machines; in fact, a carefully designed expert system could embody all
the skill of the world's best diagnosticians. In addition, an expert
system would never get tired or complain about its pay. On the
other hand, perhaps you are worried about the machine lacking 'human'
insight or compassion. I don't think anyone is suggesting that these
qualities can or should be built into such a system. Perhaps we will
see a new generation of medical personnel whose job will be to use the
available AI facilities to make the most accurate diagnoses, and help
patients interface with the system. This will provide patients with
the best medical knowledge available, and still allow personal interaction
between patients and technicians.
-jim lewis
psuvax!lewis
------------------------------
Date: 24 Nov 83 22:46:53-PST (Thu)
From: pur-ee!uiucdcs!uokvax!emjej @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: uiucdcs.4127
Re sending machines where doctors won't go: do you really think that it's
better that poor people not be treated at all than treated by a machine?
That's a bit much for me to swallow.
James Jones
------------------------------
Date: 22 Nov 83 19:37:14-PST (Tue)
From: pur-ee!uiucdcs!uicsl!Anonymous @ Ucb-Vax
Subject: Capitalistic AI - (nf)
Article-I.D.: uiucdcs.4071
Have you had your advisor leave to make megabucks in industry?
Seriously, I feel that this is a major problem for AI. There
is an extremely limited number of AI professors and a huge demand from
venture capitalists to set them up in a new company. Even fresh PhD's
are going to be disappearing into industry when they can make several
times the money they would in academia. The result is an acute (no,
make that terminal) shortage of professors to oversee the new research
generation. The monetary imbalance can only grow as AI grows.
At this university (UI) there are lots (hundreds?) of undergrads
who want to study AI; and about 8 professors to teach them. Maybe the
federal government ought to recognize that this imbalance hurts our
technological competitiveness. What will prevent academic flight?
Will IBM, Digital, and WANG support professors or will they start
hiring them away?
Here are a few things needed to keep the schools strong:
1) Higher salaries for profs in "critical areas."
(maybe much higher)
2) Long term funding of research centers.
(buildings, equipment, staff)
3) University administration support for capitalizing
on the results of research, either through making
it easy for a professor to maintain a dual life, or
by setting up a university owned company to develop
and sell the results of research.
------------------------------
Date: 14 Nov 83 17:26:03-PST (Mon)
From: harpo!floyd!clyde!akgua!psuvax!burdvax!sjuvax!bbanerje @ Ucb-Vax
Subject: Problem with Horn Clauses.
Article-I.D.: sjuvax.140
As a novice to Prolog, I have a problem determining whether a
clause is Horn, or non Horn.
I understand that a clause of the form :
A + ~B + ~C is a Horn Clause,
While one of the form :
A + B + ~C is non Horn.
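For what it is worth, the test is mechanical: a clause is Horn iff it contains at
most one positive (unnegated) literal. A minimal sketch of a checker in OCaml,
with a clause represented as a list of signed propositional symbols:

  type literal = Pos of string | Neg of string

  (* Horn iff at most one literal is positive *)
  let is_horn clause =
    List.length (List.filter (function Pos _ -> true | Neg _ -> false) clause) <= 1

  let () =
    assert (is_horn [ Pos "A"; Neg "B"; Neg "C" ]);       (* A + ~B + ~C : Horn *)
    assert (not (is_horn [ Pos "A"; Pos "B"; Neg "C" ]))  (* A + B + ~C  : not Horn *)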
However, my problem comes when trying to determine if the
following Clause is Horn or non-Horn.
!
------------\
/ ← \
/←←←←←←←←← / \←←**
←# # **
(← o o ←) ←←←←←←←←←←
xx ! xx ! HO HO HO !
xxx \←/xxx ←←/-----------
xxxxxxxxxx
Happy Holidays Everyone!
-- Binayak Banerjee
{bpa!astrovax!burdvax}!sjuvax!bbanerje
------------------------------
Date: 11/23/83 11:48:29
From: AGRE
Subject: John Batali at the AI Revolving Seminar 30 November
[Forwarded by SASW@MIT-MC]
John Batali
Trying to build an introspective problem-solver
Wednesday 30 November at 4PM
545 Tech Sq 8th floor playroom
Abstract:
I'm trying to write a program that understands how it works, and uses
that understanding to modify and improve its performance. In this
talk, I'll describe what I mean by "an introspective problem-solver",
discuss why such a thing would be useful, and give some ideas about
how one might work.
We want to be able to represent how and why some course of action is
better than another in certain situations. If we take reasoning to be
a kind of action, then we want to be able to represent considerations
that might be relevant during the process of reasoning. For this
knowledge to be useful the program must be able to reason about itself
reasoning, and the program must be able to affect itself by its
decisions.
A program built on these lines cannot think about every step of its
reasoning -- because it would never stop thinking about "how to think
about" whatever it is thinking about. On the other hand, we want it
to be possible for the program to consider any and all of its
reasoning steps. The solution to this dilemma may be a kind of
"virtual reasoning" in which a program can exert reasoned control over
all aspects of its reasoning process even if it does not explicitly
consider each step. This could be implemented by having the program
construct general reasoning plans which are then run like programs in
specific situations. The program must also be able to modify
reasoning plans if they are discovered to be faulty. A program with
this ability could then represent itself as an instance of a reasoning
plan.
Brian Smith's 3-LISP achieves what he calls "reflective" access and
causal connection: A 3-LISP program can examine and modify the state
of its interpreter as it is running. The technical tricks needed to
make this work will also find their place in an introspective
problem-solver.
My work has involved trying to make sense of these issues, as well as
working on a representation of planning and acting that can deal with
real world goals and constraints as well as with those of the planning
and plan-execution processes.
------------------------------
Date: 25 Nov 1983 1413-PST
From: Rob-Kling <Kling.UCI-20B@Rand-Relay>
Subject: Social Impacts Graduate Program at UC-Irvine
CORPS
-------
A Graduate Program on
Computing, Organizations, Policy, and Society
at the University of California, Irvine
This interdisciplinary program at the University of California,
Irvine provides an opportunity for scholars and students to
investigate the social dimensions of computerization in a setting
which supports reflective and sustained inquiry.
The primary educational opportunities are a PhD program in the
Department of Information and Computer Science (ICS) and MS and PhD
programs in the Graduate School of Management (GSM). Students in each
program can specialize in studying the social dimensions of computing.
Several students have received graduate degrees from ICS and GSM for
studying topics in the CORPS program.
The faculty at Irvine have been active in this area, with many
interdisciplinary projects, since the early 1970's. The faculty and
students in the CORPS program have approached these questions with methods drawn
from the social sciences.
The CORPS program focuses upon four related areas of inquiry:
1. Examining the social consequences of different kinds of
computerization on social life in organizations and in the larger
society.
2. Examining the social dimensions of the work and industrial worlds
in which computer technologies are developed, marketed,
disseminated, deployed, and sustained.
3. Evaluating the effectiveness of strategies for managing the
deployment and use of computer-based technologies.
4. Evaluating and proposing public policies which facilitate the
development and use of computing in pro-social ways.
Studies of these questions have focussed on complex information
systems, computer-based modelling, decision-support systems, the
myriad forms of office automation, electronic funds transfer systems,
expert systems, instructional computing, personal computers, automated
command and control systems, and computing at home. The questions
vary from study to study. They have included questions about the
effectiveness of these technologies, effective ways to manage them,
the social choices that they open or close off, the kind of social and
cultural life that develops around them, their political consequences,
and their social carrying costs.
The CORPS program at Irvine has a distinctive orientation -
(i) in focussing on both public and private sectors,
(ii) in examining computerization in public life as well as within
organizations,
(iii) by examining advanced and common computer-based technologies "in
vivo" in ordinary settings, and
(iv) by employing analytical methods drawn from the social sciences.
Organizational Arrangements and Admissions for CORPS
The primary faculty in the CORPS program hold appointments in the
Department of Information and Computer Science and the Graduate School
of Management. Additional faculty in the School of Social Sciences,
and the Program on Social Ecology, have collaborated in research or
have taught key courses for students in the CORPS program. Research
is administered through an interdisciplinary research institute at UCI
which is part of the Graduate Division, the Public Policy Research
Organization.
Students who wish additional information about the CORPS program
should write to:
Professor Rob Kling (Kling.uci-20b@rand-relay)
Department of Information and Computer Science
University of California, Irvine
Irvine, Ca. 92717
or to:
Professor Kenneth Kraemer
Graduate School of Management
University of California, Irvine
Irvine, Ca. 92717
------------------------------
End of AIList Digest
********************
∂28-Nov-83 1356 ELYSE@SU-SCORE.ARPA Faculty Meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 28 Nov 83 13:55:57 PST
Date: Mon 28 Nov 83 13:53:14-PST
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Faculty Meeting
To: Faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
There will be a Faculty Meeting on Tuesday, Jan. 10, from 2:30 - 4:00 pm. It
will be held in the Boystown conference room. Please put this on your calendar.
The faculty lunches will end, for this quarter, with the lunch on Dec. 13. The
lunches will resume on Jan. 10.
-------
∂28-Nov-83 1405 @MIT-MC:GAVAN%MIT-OZ@MIT-MC Autopoiesis and Self-Referential Systems
Received: from MIT-MC by SU-AI with TCP/SMTP; 28 Nov 83 14:05:36 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 28 Nov 83 16:41-EST
Date: Mon, 28 Nov 1983 16:36 EST
Message-ID: <GAVAN.11971303982.BABYL@MIT-OZ>
From: GAVAN%MIT-OZ@MIT-MC.ARPA
To: crummer@AEROSPACE (Charlie Crummer)
Cc: PHIL-SCI@MIT-MC
Subject: Autopoiesis and Self-Referential Systems
In-reply-to: Msg of 28 Nov 1983 14:37-EST from crummer at AEROSPACE (Charlie Crummer)
From: crummer at AEROSPACE (Charlie Crummer)
I have been reading lately about so-called "autopoietic" systems, i.e.
systems which produce themselves (they may also reproduce themselves, but
that is something else). The concept comes from the biologists Humberto
Maturana, Francisco Varela, and others. An example of an autopoietic system
is a living cell. It establishes and maintains its own integrity
from within. This is an interesting concept and may have use in
describing political and other organizational systems.
Maturana used to claim that autopoietic systems are "closed", that is,
(according to standard biological usage promulgated by von
Bertalanffy) they do not exchange matter and energy with their
environments. After hearing numerous disputes on this question at
conferences (my sources tell me), Maturana backed down. Autopoietic
systems are relatively closed, but certainly not completely. As
biological, living systems they are open. They exchange matter and
energy with their environments. An autopoietic system is certainly
a system that reproduces itself, but I doubt that it PRODUCES itself.
Do Maturana or Varela claim this? I've never read any such claim.
As for the utility of self-reproduction in describing political and
other organizational systems, yes, there is interest in the concept
among some social scientists. Few, if any, of them would maintain
that any organization or state is a closed system, however. They
speak instead of the RELATIVE autonomy of the state, not complete
autonomy. In other words, there is certainly some amount of system
maintenance from within, but organizations are also susceptible to
(and responsive to) environmental pressures.
The desire to show that a system (ANY system) is completely autonomous
is, in my view, just another attempt to revive the rationalist dogma
of the middle ages. Undoubtedly the best attempt was made by Kant
in *The Critique of Pure Reason*, but in order to do so he was
forced to posit a dualism (noumena vs. phenomena) that he already
knew (from his studies of Leibniz) was untenable. According to
Weldon's critique of The Critique (Oxford University Press, in the
1950s or 60s), Kant had been influenced by Locke's student Tetens.
See also P. F. Strawson's critique of Kant, *The Bounds of Sense*.
∂28-Nov-83 1441 TAJNAI@SU-SCORE.ARPA LOTS OF FOOD at IBM Reception
Received: from SU-SCORE by SU-AI with TCP/SMTP; 28 Nov 83 14:41:16 PST
Date: Mon 28 Nov 83 14:27:05-PST
From: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: LOTS OF FOOD at IBM Reception
To: research-associates@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA,
students@SU-SCORE.ARPA, secretaries@SU-SCORE.ARPA, bureaucrat@SU-SCORE.ARPA
Dr. Sadagopan and Brent Hailpern asked me to arrange the IBM reception
for CSD, CSL and CIS.
I have ordered wine, beer (Henry's), assorted soft drinks, apple juice
and lots of food. Food will be replenished at 5:30 for those who
can't make it at 4:30.
Wednesday, Nov. 30
4:30 to 6:30
Tresidder 281/282
Carolyn
-------
∂28-Nov-83 1450 SCHMIDT@SUMEX-AIM.ARPA Symbolics Christmas gathering
Received: from SUMEX-AIM by SU-AI with TCP/SMTP; 28 Nov 83 14:49:49 PST
Date: Mon 28 Nov 83 14:48:13-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: Symbolics Christmas gathering
To: HPP-Lisp-Machines@SUMEX-AIM.ARPA
I have received from Rick Dukes (Symbolics) a cheery little
card bearing the visages of what appear to be 9 of Santa's helpers
inviting lisp machine users to a get-together to be held on December
21. He notes that all are welcome. The text of the card is
reproduced below. --Christopher
SYMBOLICS INC.
would like to invite you
to their
CHRISTMAS GATHERING
in our Palo Alto office
located at
845 Page Mill Road
Palo Alto, CA 94304
on Wednesday
December 21, 1983
at 1:00 P. M.
RSVP: December 7, 1983
---- (Leslie or Denise)
415/494-8081
-------
∂28-Nov-83 1511 RPERRAULT@SRI-AI.ARPA meeting this week
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Nov 83 15:10:53 PST
Date: Mon 28 Nov 83 11:54:41-PST
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: meeting this week
To: csli-b3@SRI-AI.ARPA, csli-b5@SRI-AI.ARPA
cc: rperrault@SRI-AI.ARPA, csli-folks@SRI-AI.ARPA
At this week's joint meeting of B3 and B5, Geoff Nunberg will discuss his
paper "Individuation in context". Phil Cohen will talk about indirect
speech acts on Dec. 7.
Wednesday, November 30, 9 am, in Ventura.
Ray
-------
∂28-Nov-83 1605 ELYSE@SU-SCORE.ARPA Faculty Meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 28 Nov 83 16:04:49 PST
Date: Mon 28 Nov 83 16:04:16-PST
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Faculty Meeting
To: CSD-Tenured-Faculty: ;
Stanford-Phone: (415) 497-9746
There will be a Tenured Faculty Meeting on Dec. 6, Tuesday, from 2:30 to 4 in
MJH 252.
-------
∂28-Nov-83 1801 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
Received: from SU-SCORE by SU-AI with TCP/SMTP; 28 Nov 83 18:01:41 PST
Date: Mon 28 Nov 83 18:00:59-PST
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)
To: aflb.all@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
N E X T A F L B T A L K (S)
Busy week ahead:
Besides the regular Thursday talk, there will be an extra AFLB,
Friday, Dec. 2, at 2:30, in MJH301.
For those of you who manifested an interest in Maple: I'll talk about
formal manipulation systems, as a guest lecturer in CS155, Friday,
Dec. 2, in ERL 320, at 1:15. Part of the talk will be an introduction
to Maple and Macsyma. Auditors are welcome.
Now for the bad news: We don't have a speaker for Dec. 9. What about
YOU???
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
12/1/83 - Dr. Leo Guibas (Xerox - PARC)
"Optimal Point Location in a Monotone Subdivision"
Point location, often known in graphics as "hit detection", is one of
the fundamental problems of computational geometry. In a point
location query we want to identify which of a given collection of
geometric objects contains a particular point.
Let S denote a subdivision of the Euclidean plane into monotone regions
by a straight-line graph of m edges. In this talk we exhibit a
substantial refinement of the technique of Lee and Preparata for
locating a point in S based on separating chains. The new data
structure, called a layered dag, can be built in O(m) time, uses O(m)
storage, and makes possible point location in O(log m) time. Unlike
previous structures that attain these optimal bounds, the layered dag
can be implemented in a simple and practical way, and is extensible to
subdivisions with edges more general than straight-line segments. This
is joint work with Herbert Edelsbrunner and Jorge Stolfi.
******** Time and place: Dec. 1, 12:30 pm in MJ352 (Bldg. 460) *******
Special AFLB talk:
12/2/83 Prof. Eli Shamir (Hebrew University):
Parallel algorithms for factorization problems of GF(q) polynomials
or
How to cope with Euclid's GCD
It is not known whether gcd(f,g) for polynomials of degree <= n can be
parallelized with O(n) processors. However, if the polynomials are
over finite fields, factorization related algorithms admit an optimal
speedup, by a proper scheduling of the calls to the gcd subroutine.
Along the way we derive in an elementary way the distribution of the
lowest factor degree in a random polynomial.
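To make the object of study concrete, here is a rough sequential sketch of
Euclid's algorithm for polynomials, written in OCaml over GF(2) with a polynomial
represented as a bit mask (bit i is the coefficient of x^i); the talk itself
concerns parallel algorithms over general GF(q), which this sketch does not attempt:

  let degree f =
    let rec go d x = if x <= 1 then d else go (d + 1) (x lsr 1) in
    if f = 0 then -1 else go 0 f

  (* remainder of f divided by g over GF(2): repeatedly cancel the leading term *)
  let rec pmod f g =
    if g = 0 then invalid_arg "pmod: zero divisor"
    else if degree f < degree g then f
    else pmod (f lxor (g lsl (degree f - degree g))) g

  let rec pgcd f g = if g = 0 then f else pgcd g (pmod f g)

  let () =
    (* x^2 + 1 = (x + 1)^2 over GF(2), so gcd(x^2 + 1, x + 1) = x + 1 *)
    assert (pgcd 0b101 0b011 = 0b011)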
******** Time and place: Dec. 2, 2:30 pm in MJ301 (Bldg. 460) *******
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
If you have a topic you would like to talk about in the AFLB seminar
please tell me. (Electronic mail: broder@su-score.arpa, Office: CSD,
Margaret Jacks Hall 325, (415) 497-1787) Contributions are wanted and
welcome. Not all time slots for the autumn quarter have been filled
so far.
For more information about future AFLB meetings and topics you might
want to look at the file [SCORE]<broder>aflb.bboard .
- Andrei Broder
-------
∂29-Nov-83 0155 LAWS@SRI-AI.ARPA AIList Digest V1 #105
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Nov 83 01:55:06 PST
Date: Mon 28 Nov 1983 22:36-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #105
To: AIList@SRI-AI
AIList Digest Tuesday, 29 Nov 1983 Volume 1 : Issue 105
Today's Topics:
AI - Challenge & Responses & Query
----------------------------------------------------------------------
Date: 21 Nov 1983 12:25-PST
From: dietz%usc-cse%USC-ECL@SRI-NIC
Reply-to: dietz%USC-ECL@SRI-NIC
Subject: Re: The AI Challenge
I too am skeptical about expert systems. Their attraction seems to be
as a kind of intellectual dustbin into which difficulties can be swept.
Have a hard problem that you don't know (or that no one knows) how to
solve? Build an expert system for it.
Ken Laws' idea of an expert system as a very modular, hackable program
is interesting. A theory or methodology on how to hack programs would
be interesting and useful, but would become another AI spinoff, I fear.
------------------------------
Date: Wed 23 Nov 83 18:02:11-PST
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: response to response to challenge
Tom,
I thought you made some good points in your response to Ralph
Johnson in the AIList, but one of your claims is unsupported, important,
and quite possibly wrong. The claim I refer to is
"Expert systems can be built, debugged, and maintained more cheaply
than other complicated systems. And hence, they can be targeted at
applications for which previous technology was barely adequate."
I would be delighted if this could be shown to be true, because I
would very much like to show friends/clients in industry how to use AI to
solve their problems more cheaply.
However, there are no formal studies that compare a system built
using AI methods to one built using other methods, and no studies that have
attempted to control for other causes of differences in ease of building,
debugging, maintaining, etc. such as differences in programmer experience,
programming language, use or otherwise of structured programming techniques,
etc.
Given the lack of controlled, reproducible tests of the effectiveness
of AI methods for program development, we have fallen back on qualitative,
intuitive arguments. The same sort of arguments have been and are made for
structured programming, application generators, fourth-generation languages,
high-level languages, and ADA. While there is some truth in the various
claims about improved programmer productivity they have too often been
overblown as The Solution To All Our Problems. This is the case with
claiming AI is cheaper than any other methods.
A much more reasonable statement is that AI methods may turn out
to be cheaper / faster / otherwise better than other methods if anyone ever
actually builds an effective and economically viable expert system.
My own guess is that it is easier to develop AI systems because we
have been working in a LISP programming environment that has provided tools
like interpreted code, interactive debugging/tracing/editing, masterscope
analysis, etc. These points were made quite nicely in Beau Sheil's recent
article in Datamation ("Power Tools for Programmers," I think, was the title).
None of these are intrinsic to AI.
Many military and industry managers who are supporting AI work are
going to be very disillusioned in a few years when AI doesn't deliver what
has been promised. Unsupported claims about the efficacy of AI aren't going
to help. It could hurt our credibility, and thereby our funding and ability
to continue the basic research.
Mike Walker
WALKER@SUMEX-AIM.ARPA
------------------------------
Date: Fri 25 Nov 83 17:40:44-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: response to response to challenge
Mike,
While I would certainly welcome the kinds of controlled studies that
you sketched in your msg, I think my claim is correct and can be
supported. Virtually every expert system that has been built has been
targeted at tasks that were previously untouched by computing
technology. I claim that the reason for this is that the proper
programming methodology was needed before these tasks could be
addressed. I think the key parts of that methodology are (a) a
modular, explicit representation of knowledge, (b) careful separation
of this knowledge from the inference engine, and (c) an
expert-centered approach in which extensive interviews with experts
replace attempts by computer people to impose a normative,
mathematical theory on the domain.
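A minimal sketch of what (a) and (b) amount to in code: the knowledge is a plain,
editable data structure of if-then rules kept apart from a tiny generic
forward-chaining engine. The rules and facts below are made-up placeholders, not
drawn from any real system; the expert-centered work in (c) is about where real
rules come from, which no sketch can show.

  type rule = { if_all : string list; then_ : string }

  (* the knowledge base: can be extended or edited without touching the engine *)
  let rules = [
    { if_all = [ "fever"; "rash" ]; then_ = "suspect-measles" };
    { if_all = [ "suspect-measles"; "recent-exposure" ]; then_ = "diagnose-measles" };
  ]

  (* the engine: fire rules until no new facts are produced *)
  let rec chain facts =
    let fire acc r =
      if List.for_all (fun c -> List.mem c acc) r.if_all && not (List.mem r.then_ acc)
      then r.then_ :: acc
      else acc
    in
    let facts' = List.fold_left fire facts rules in
    if List.length facts' = List.length facts then facts else chain facts'

  let () =
    assert (List.mem "diagnose-measles" (chain [ "fever"; "rash"; "recent-exposure" ]))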
Since there are virtually no cases where expert systems and
"traditional" systems have been built to perform the same task, it is
difficult to support this claim. If we look at the history of
computers in medicine, however, I think it supports my claim.
Before expert systems techniques were available, many people
had attempted to build computational tools for physicians. But these
tools suffered from the fact that they were often burdened with
normative theories and often ignored the clinical aspects of disease
diagnosis. I blame these deficiencies on the lack of an
"expert-centered" approach. These programs were also difficult to
maintain and could not produce explanations because they did not
separate domain knowledge from the inference engine.
I did not claim anywhere in my msg that expert systems techniques are
"The Solution to All Our Problems". Certainly there are problems for
which knowledge programming techniques are superior. But there are
many more for which they are too expensive, too slow, or simply
inappropriate. It would be absurd to write an operating system in
EMYCIN, for example! The programming advances that would allow
operating systems to be written and debugged easily are still
undiscovered.
You credit fancy LISP environments for making expert systems easy to
write, debug, and maintain. I would certainly agree: The development
of good systems for symbolic computing was an essential prerequisite.
However, the level of program description and interpretation in EMYCIN
is much higher than that provided by the Interlisp system. And the
"expert-centered" approach was not developed until Ted Shortliffe's
dissertation.
You make a very good point in your last paragraph:
Many military and industry managers who are supporting AI work
are going to be very disillusioned in a few years when AI
doesn't deliver what has been promised. Unsupported claims
about the efficacy of AI aren't going to help. It could hurt
our credibility, and thereby our funding and ability to
continue the basic research.
AI (at least in Japan) has "promised" speech understanding, language
translation, etc. all under the rubric of "knowledge-based systems".
Existing expert-systems techniques cannot solve these problems. We
need much more research to determine what things CAN be accomplished
with existing technology. And we need much more research to continue
the development of the technology. (I think these are much more
important research topics than comparative studies of expert-systems
technology vs. other programming techniques.)
But there is no point in minimizing our successes. My original
message was in response to an accusation that AI had no merit.
I chose what I thought was AI's most solid contribution: an improved
programming methodology for a certain class of problems.
--Tom
------------------------------
Date: Fri 25 Nov 83 17:52:47-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Clarifying my "AI Challenge"
Although I've written three messages on this topic already, I guess
I've never really addressed Ralph Johnson's main question:
My question, though, is whether AI is really going to change
the world any more than the rest of computer science is
already doing. Are the great promises of AI going to be
fulfilled?
My answer: I don't know. I view "the great promises" as goals, not
promises. If you are a physicalist and believe that human beings are
merely complex machines, then AI should in principle succeed.
However, I don't know if present AI approaches will turn out to be
successful. Who knows? Maybe the human brain is too complex to ever
be understood by the human brain. That would be interesting to
demonstrate!
--Tom
------------------------------
Date: 24 Nov 83 5:00:32-PST (Thu)
From: pur-ee!uiucdcs!smu!leff @ Ucb-Vax
Subject: Re: The AI Challenge - (nf)
Article-I.D.: uiucdcs.4118
There was a recent discussion of an AI project that was done at
ONR on determining the cause of a chemical spill in a large chemical
plant with various ducts and pipes and manholes, etc. I argued that
the thing was just an application of graph algorithms and searching
techniques.
(That project was what could be done in three days by an AI team as
part of a challenge from ONR and quite possibly is not representative.)
Theorem proving using resolution is something that someone with just
a normal algorithms background would not simply come up with 'as an
application of normal algorithms.' Using if-then rules perhaps might
be cast as a search of the kind you might see in an algorithms book,
although I don't expect the average CS person with a background in
algorithms to come up with that application; once it was pointed out,
though, it would be quite intuitive.
One interesting note is that although most of the AI stuff is done in
LISP, a big theorem proving program discussed by Wos at a recent IEEE
meeting here was written in PASCAL. It did some very interesting things.
One point that was made is that they submitted a paper to a logic journal.
Although the journal agreed the results were worth publishing, the "computer
stuff" had to go.
Continuing on this rambling aside, some people submitted results in
mechanical engineering using a symbolic manipulator, referencing the use
of the program only in a footnote. The poor referee conscientiously
tried to duplicate the derivations manually. Finally he noticed the
reference and sent a letter back saying that the use of symbolic
manipulation by computer must be stated in the covering letter.
Getting back to the original subject, I had a discussion with someone
doing research on daemons. After he explained to me what daemons were,
I came to the conclusion they were a fancy name for what you described
as a hack. A straightforward application of theorem proving or if-then
rule techniques would be inefficient or otherwise infeasible, so one
puts in an exception to handle a certain kind of case. What is the
difference between that and an error handler for zero divides, as
opposed to putting a test everywhere one does a division?
Along the subject of hacking, there was a DATAMATION article, 'Real
Programmers Don't Use PASCAL,' in which the author complained about the
demise of the person who would modify a program on the fly using the
switch register, etc. He remarked at the end that some of the debugging
techniques in LISP AI environments were starting to look like the old
style techniques of assembler hackers.
------------------------------
Date: 24 Nov 83 22:29:44-PST (Thu)
From: pur-ee!notes @ Ucb-Vax
Subject: Re: The AI Challenge - (nf)
Article-I.D.: pur-ee.1148
As an aside to this discussion, I'm curious as to just what everyone
thinks of when they think of AI.
I am a student at Purdue, which has absolutely nothing in the way of
courses on what *I* consider AI. I have done a little bit of reading
on natural language processing, but other than that, I haven't had
much of anything in the way of instruction on this stuff, so maybe I'm
way off base here, but when I think of AI, I primarily think of:
1) Natural Language Processing, first and foremost. In
this, I include being able to "read" it and understand
it, along with being able to "speak" it.
2) Computers "knowing" things - i.e., stuff along the
lines of the famous "blocks world", where the "computer"
has notions of pyramids, boxes, etc.
3) Computers/programs which can pass the Turing test (I've
always thought that ELIZA sort of passes this test, at
least in the sense that lots of people actually think
the computer understood their problems).
4) Learning programs, like the tic-tac-toe programs that
remember that "that" didn't work out, only on a much
more grandiose scale.
5) Speech recognition and understanding (see #1).
For some reason, I don't think of pattern recognition (like analyzing
satellite data) as AI. After all, it seems to me that this stuff is
mostly just "if <cond 1> it's trees, if <cond 2> it's a road, etc.",
which doesn't really seem like "intelligence".
[If it were that easy, I'd be out of a job. -- KIL]
What do you think of when I say "Artificial Intelligence"? Note that
I'm NOT asking for a definition of AI, I don't think there is one. I
just want to know what you consider AI, and what you consider "other"
stuff.
Another question -- assuming the (very) hypothetical situation where
computers and their programs could be made to be "infinitely" intelligent,
what is your "dream program" that you'd love to see written, even though
it realistically will probably never be possible? Jokingly, I've always
said that my dream is to write a "compiler that does what I meant, not
what I said".
--Dave Curry
decvax!pur-ee!davy
eevax.davy@purdue
------------------------------
End of AIList Digest
********************
∂29-Nov-83 0830 EMMA@SRI-AI.ARPA recycling bin
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Nov 83 08:29:53 PST
Date: Tue 29 Nov 83 08:31:02-PST
From: EMMA@SRI-AI.ARPA
Subject: recycling bin
To: csli-folks@SRI-AI.ARPA
The paper recycling bin has been moved to room 7 (mail room),
behind the door. Please use it for recycling plain paper including
computer paper and copying paper (blank copy paper can be reused in
the machine so don't recycle it). Do not use the bin as a trash
can or recycle glossy paper.
I would like comments on whether we should recycle glass and
aluminum.
Thank you,
Emma
-------
∂29-Nov-83 1122 GOLUB@SU-SCORE.ARPA lunch
Received: from SU-SCORE by SU-AI with TCP/SMTP; 29 Nov 83 11:22:40 PST
Date: Tue 29 Nov 83 11:21:36-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: lunch
To: faculty@SU-SCORE.ARPA
I don't have much on the plate today, so to speak. I would like to discuss
a proposition from IBM. Do you have any comments? GENE
-------
∂29-Nov-83 1128 GOLUB@SU-SCORE.ARPA IBM message
Received: from SU-SCORE by SU-AI with TCP/SMTP; 29 Nov 83 11:28:02 PST
Date: Tue 29 Nov 83 11:22:33-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: IBM message
To: faculty@SU-SCORE.ARPA
This is the message from IBM.
Return-Path: <GP1.YKTVMT.IBM-SJ@Rand-Relay>
Received: from rand-relay.ARPA by SU-SCORE.ARPA with TCP; Wed 23 Nov 83 13:47:05-PST
Date: 23 Nov 1983 10:30:05-EST (Wednesday)
From: George Paul <GP1.YKTVMT.IBM@Rand-Relay>
Return-Path: <GP1.YKTVMT.IBM-SJ@Rand-Relay>
Subject: Meeting at Stanford
To: Golub@SU-SCORE
Via: IBM-SJ; 23 Nov 83 13:16-PST
Gene, thanks for the updated list.
With regard to the meeting I suggested during our conversation at Norfolk,
let me re-iterate. The purpose of the meeting would be to provide the
opportunity for our departments to get to know each other better and to
explore areas of possible cooperative research either formally or informally.
Toward this goal, I would like to select with you, say four topics of mutual
research interest to Stanford and to us. These topics would then serve as the
basis for a "mini-symposium" in which selected speakers from your faculty and
graduate students, and research staff members from Yorktown and San Jose would
present technical papers on their current research. A symposium along these
lines could I believe be scheduled for a day and a half or two days in duration.
We would like the program to be relaxed and informal, allowing plenty of time
for discussions. IBM would host an evening banquet either prior to the
first day or between the first and second days for people to become better
acquainted.
In addition to the speakers, I would invite appropriate management from
Yorktown, San Jose and the Palo Alto Scientific Center, including Herb Schorr
and the directors from our department. I would like to schedule the meeting
for some time early next year, convenient to your class schedules and
during which Herb is available. The meeting could be held on-site at Stanford
if it is convenient, or I can arrange for appropriate facilities in the area.
We would also like to invite faculty from CSL and CIS. I have spoken to
Mike Flynn about this possibility previously.
Areas of interest to us would include: computer architecture and organization
including parallel processing, workstations and local-area networks, VLSI and
design automation, AI and expert systems, programming technology, etc.
Please let me know your thoughts regarding meeting, subjects of interest to
you and possible dates.
George
-------
∂29-Nov-83 1251 @MIT-MC:DAM%MIT-OZ@MIT-MC Model Theoretic Ontologies
Received: from MIT-MC by SU-AI with TCP/SMTP; 29 Nov 83 12:51:07 PST
Date: Tue, 29 Nov 1983 15:43 EST
Message-ID: <DAM.11971556410.BABYL@MIT-OZ>
From: DAM%MIT-OZ@MIT-MC.ARPA
To: BATALI%MIT-OZ@MIT-MC.ARPA
cc: phil-sci%MIT-OZ@MIT-MC.ARPA
Subject: Model Theoretic Ontologies
Date: Monday, 28 November 1983, 12:00-EST
From: John Batali <Batali>
To do programming, there must be some notion of "process" in the
ontology, some idea of things happening in some temporal relation. ...
The point is that a good representation language has to be more than
just "logic." ... Logic alone is inadequate, the argument goes
because it, by itself, presupposes only the assumption that "the true"
exists.
Whether or not "logic" in itself has a rich ontology depends
on what one means by "logic". I take "a logic" to consist of two
things: a set of models and a set of propositions, where each
proposition is associated with a truth function on models. There are
lots of different logics studied by logicians these days and most of
them are defined semantically, i.e. each proposition is associated
with a truth function on a set of models. The things which count as
models vary with the logic. For example the models can be as simple
as unstructured sets (in sentential calculus) or as complex as a
collection of possible worlds with an algebra of accessibility
relations defined on them (dynamic logic). Under this (modern) notion
of "logic" the logic itself does come with a rich ontology. Though
I think new logics are needed with even richer ontologies.
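As a minimal sketch of the picture described here -- each proposition paired
with a truth function on models -- the following Prolog fragment treats
sentential calculus, with a model represented simply as the list of atomic
propositions true in it. The predicate names are invented for this sketch and
are not from the message.

% true_in(Model, Prop): Prop holds in Model, where a model is just the
% list of atoms true in it (sentential calculus only).
member(X, [X|_]).
member(X, [_|T]) :- member(X, T).

true_in(Model, atom(P))   :- member(P, Model).
true_in(Model, and(A, B)) :- true_in(Model, A), true_in(Model, B).
true_in(Model, or(A, _))  :- true_in(Model, A).
true_in(Model, or(_, B))  :- true_in(Model, B).
true_in(Model, not(A))    :- \+ true_in(Model, A).

% A proposition's truth function is the set of models in which true_in
% succeeds; a richer logic would swap in richer model structures.
% ?- true_in([p,q], and(atom(p), not(atom(r)))).   succeeds.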
David Mc
∂29-Nov-83 1315 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Dec. 1
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Nov 83 13:15:36 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Tue 29 Nov 83 13:10:46-PST
Date: Tue, 29 Nov 83 13:08 PST
From: desRivieres.PA@PARC-MAXC.ARPA
Subject: CSLI Activities for Thursday Dec. 1
To: csli-friends@SRI-AI.ARPA
Reply-to: desRivieres.PA@PARC-MAXC.ARPA
CSLI SCHEDULE FOR THURSDAY, DECEMBER 1st, 1983
10:00 Research Seminar on Natural Language
Speaker: Paul Kiparsky (MIT)
Topic: On lexical phonology and morphology.
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Paul Martin (SRI)
Paper for discussion: "Planning English Referring Expressions"
by Douglas Appelt
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Luca Cardelli (Bell Labs)
Title: "Type Systems in Programming Languages"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Charles Bigelow (CS, Stanford)
Title: "Selected Problems in Visible Language"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. $0.75 all-day parking is available
in a lot located just off Campus Drive, across from the construction site.
∂29-Nov-83 1344 GROSZ@SRI-AI.ARPA important meeting Thursday at 1
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Nov 83 13:44:21 PST
Date: Tue 29 Nov 83 13:40:18-PST
From: Barbara J. Grosz <GROSZ@SRI-AI.ARPA>
Subject: important meeting Thursday at 1
To: csli-principals@SRI-AI.ARPA
There will be a meeting THIS THURSDAY (December 1) at 1 P.M. (location
uncertain) to discuss the current Areas A and B budgets and the
problem(s) of funding for the whole gamut of natural-language research
activities at CSLI. The main goals of this meeting are to state the
problem(s) and to begin (everyone concerned) thinking about the range
of possible solutions. If we can, we will send out a message with more
details late Tuesday; at least we'll let you know where the meeting
will be.
It is important for everyone with an interest in A&B to attend. We
apologize for the late notice.
Thanks
Stanley and Barbara
-------
∂29-Nov-83 1434 RIGGS@SRI-AI.ARPA Dec. 1 A and B Project Meeting Time
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Nov 83 14:34:27 PST
Date: Tue 29 Nov 83 14:35:23-PST
From: RIGGS@SRI-AI.ARPA
Subject: Dec. 1 A and B Project Meeting Time
To: CSLI-principals@SRI-AI.ARPA
cc: RIGGS@SRI-AI.ARPA,
The meeting for the A and B area people will be held from 1:00
to 2:00 p.m., Thursday, Dec. 1 in the Ventura Conference Room
after TIN-Lunch. This meeting is the one referred to by Barbara Grosz
and Stanley Peters.
-------
∂29-Nov-83 1603 ELYSE@SU-SCORE.ARPA Reminder
Received: from SU-SCORE by SU-AI with TCP/SMTP; 29 Nov 83 16:03:47 PST
Date: Tue 29 Nov 83 16:02:23-PST
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Reminder
To: faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
On November 18 I sent round a memo from Gene on full disclosure on consulting.
Many of you have not sent this in to me yet. I urge you to complete this and
return it to me as soon as possible. Thank you for your cooperation on this.
Elyse.
-------
∂29-Nov-83 1837 LAWS@SRI-AI.ARPA AIList Digest V1 #106
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Nov 83 18:36:23 PST
Date: Tue 29 Nov 1983 12:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #106
To: AIList@SRI-AI
AIList Digest Wednesday, 30 Nov 1983 Volume 1 : Issue 106
Today's Topics:
Conference - Logic Conference Correction,
Intelligence - Definitions,
AI - Definitions & Research Methodology & Jargon,
Seminar - Naive Physics
----------------------------------------------------------------------
Date: Mon 28 Nov 83 22:32:29-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Correction
The ARPANET address in the announcement of the IEEE 1984 Logic Programming
Symposium should be PEREIRA@SRI-AI, not PERIERA@SRI-AI.
Fernando Pereira
[My apologies. I am the one who inserted Dr. Pereira's name incorrectly.
I was attempting to insert information from another version of the same
announcement that also reached the AIList mailbox. -- KIL]
------------------------------
Date: 21 Nov 83 6:04:05-PST (Mon)
From: decvax!mcvax!enea!ttds!alf @ Ucb-Vax
Subject: Re: Behavioristic definition of intelligence
Article-I.D.: ttds.137
Doesn't the concept "intelligence" have some characteristics in common with
a concept such as "traffic"? It seems obvious that one can measure such
entities as "traffic intensity" and the like thereby gaining an indirect
understanding of the conditions that determine the "traffic" but it seems
very difficult to find a direct measure of "traffic" as such. Some may say
that "traffic" and "traffic intensity" are synonymous concepts but I don't
agree. The common opinion among psychologists seems to be that
"intelligence" is that which is measured by an intelligence test. By
measuring a set of problem solving skills and weighing the results together
we get a value. Why not call it "intelligence" ? The measure could be
applicable to machine intelligence also as soon as (if ever) we teach the
machines to pass intelligence tests. It should be quite clear that
"intelligence" is not the same as "humanness" which is measured by a Turing
test.
------------------------------
Date: Sat, 26 Nov 83 2:09:14 EST
From: A B Cooper III <abc@brl-bmd>
Subject: Where wise men fear to tread
Being nothing more than an amateur observer on the AI scene,
I hesitate to plunge in like a fool.
Nevertheless, the roundtable on what constitutes intelligence
seems to cover many interesting hypotheses:
survivability
speed of solving problems
etc
but one. Being married to a professional educator, I've found
that the common working definition of intelligence is
the ability TO LEARN.
The more easily one learns new material, the
more intelligent one is said to be.
The more quickly one learns new material,
the more intelligent one is said to be.
One who can learn easily and quickly across a
broad spectrum of subjects is said to
be more intelligent than one whose
abilities are concentrated in one or
two areas.
One who learns only at an average rate, except
for one subject area in which he or she
excels far above the norms is thought
to be TALENTED rather than INTELLIGENT.
It seems to be believed that the most intelligent
folks learn easily and rapidly without
regard to the level of material. They
assimilate the difficult with the easy.
Since this discussion was motivated, at least in part, by the
desire to understand what an "intelligent" computer program should
do, I feel that we should re-visit some of our terminology.
In the earlier days of Computer Science, I seem to recall some
excitement about machines (computers) that could LEARN. Was this
the precursor of AI? I don't know.
If we build an EXPERT SYSTEM, have we built an intelligent machine
(can it assimilate new knowledge easily and quickly), or have we
produced a "dumb" expert? Indeed, aren't many of our AI or
knowledge-based or expert systems really something like "dumb"
experts?
------------------------
You might find the following interesting:
Siegler, Robert S, "How Knowledge Influences Learning,"
AMERICAN SCIENTIST, v71, Nov-Dec 1983.
In this reference, Siegler addresses the questions of how
children learn and what they know. He points out that
the main criticism of intelligence tests (that they measure
'knowledge' and not 'aptitude') may miss the mark--that
knowledge and learning may be linked, in humans anyway, in
ways that traditional views have not considered.
-------------------------
In any case, should we not be addressing as a primary research
objective, how to make our 'expert systems' into better learners?
Brint Cooper
abc@brl.arpa
------------------------------
Date: 23 Nov 83 11:27:34-PST (Wed)
From: dambrosi @ Ucb-Vax
Subject: Re: Intelligence
Article-I.D.: ucbvax.373
Hume once said that when a discussion or argument seems to be
interminable and without discernable progress, it is worthwhile
to attempt to produce a concrete visualisation of the concept
being argued about. Often, he claimed, this will be IMPOSSIBLE
to do, and this will be evidence that the word being argued
about is a ringer, and the discussion pointless. In more
modern parlance, these concepts are definitionally empty
for most of us.
I submit the following definition as the best presently available:
Intelligence consists of perception of the external environment
(e.g. vision), knowledge representation, problem solving, learning,
interaction with the external environment (e.g. robotics),
and communication with other intelligent agents (e.g. natural
language understanding). (note the conjunctive connector)
If you can't guess where this comes from, check the AAAI-83
proceedings table of contents.
bruce d'ambrosio
dambrosi%ucbernie@berkeley
------------------------------
Date: Tuesday, 29 Nov 1983 11:43-PST
From: narain@rand-unix
Subject: Re: AI Challenge
AI is advanced programming.
We need to solve complex problems involving reasoning and judgment, so
we develop appropriate computer techniques (mainly software)
for that. It is our responsibility to invent techniques that make
efficient intelligent computer programs easier to develop, debug, extend,
and modify. For this purpose it is only useful to learn whatever we can from
traditional computer science and apply it to the AI effort.
Tom Dietterich said:
>> Your view of "knowledge representations" as being identical with data
>> structures reveals a fundamental misunderstanding of the knowledge vs.
>> algorithms point. Most AI programs employ very simple data structures
>> (e.g., record structures, graphs, trees). Why, I'll bet there's not a
>> single AI program that uses leftist-trees or binomial queues! But, it
>> is the WAY that these data structures are employed that counts.
We at Rand have ROSS (Rule Oriented Simulation System) that has been employed
very successfully for developing two large scale simulations (one strategic
and one tactical). One implementation of ROSS uses leftist trees for
maintaining event queues. Since these queues are in the innermost loop
of ROSS's operation, it was only sensible to make them as efficient as
possible. We think we are doing AI.
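For readers who have not met the data structure the point turns on: a leftist
tree is a heap that is cheap to merge, which is just what a simulation event
queue wants. The sketch below is not ROSS code; it is an illustration with
invented predicate names, ordering events by their scheduled time.

% An event queue as a leftist heap: 'empty' or t(Rank,Time,Event,Left,Right).
rank(empty, 0).
rank(t(R,_,_,_,_), R).

% Rebuild a node, keeping the shorter right spine on the right.
make(T, E, A, B, t(R,T,E,A,B)) :- rank(A,RA), rank(B,RB), RA >= RB, R is RB+1.
make(T, E, A, B, t(R,T,E,B,A)) :- rank(A,RA), rank(B,RB), RA <  RB, R is RA+1.

merge(empty, H, H).
merge(t(R,T,E,L,Rt), empty, t(R,T,E,L,Rt)).
merge(t(_,T1,E1,L1,Rt1), t(R2,T2,E2,L2,Rt2), H) :-
    T1 =< T2,
    merge(Rt1, t(R2,T2,E2,L2,Rt2), M),
    make(T1, E1, L1, M, H).
merge(t(R1,T1,E1,L1,Rt1), t(_,T2,E2,L2,Rt2), H) :-
    T1 > T2,
    merge(t(R1,T1,E1,L1,Rt1), Rt2, M),
    make(T2, E2, L2, M, H).

% schedule/4 adds an event; next_event/4 removes the earliest one.
schedule(T, E, Heap0, Heap) :- merge(t(1,T,E,empty,empty), Heap0, Heap).
next_event(t(_,T,E,L,R), T, E, Rest) :- merge(L, R, Rest).

Both operations only walk the right spines, so they cost O(log n), which is
why the structure suits an innermost simulation loop.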
Sanjai Narain
Rand Corp.
------------------------------
Date: Tue, 29 Nov 83 11:31:54 PST
From: Michael Dyer <dyer@UCLA-CS>
Subject: defining AI, AI research methodology, jargon in AI (long msg)
This is in three flaming parts: (I'll probably never get up the steam to
respond again, so I'd better get it all out at once.)
Part I. "Defining intelligence", "defining AI" and/or "responding to AI
challenges" considered harmful: (enough!)
Recently, I've started avoiding/ignoring AIList since, for the most
part, it's been an endless discussion on "defining AI" (or, most
recently, defending AI). If I spent my time trying to "define/defend"
AI or intelligence, I'd get nothing done. Instead, I spend my time
trying to figure out how to get computers to achieve some task -- exhibit
some behavior -- which might be called intelligent or human-like.
If/whenever I'm partially successful, I try to keep track about what's
systematic or insightful. Both failure points and partial success
points serve as guides for future directions. I don't spend my time
trying to "define" intelligence by BS-ing about it. The ENTIRE
enterprise of AI is the attempt to define intelligence.
Here's a positive suggestion for all you AIList-ers out there:
It'd be nice to see more discussion of SPECIFIC programs/cognitive
models: their assumptions, their failures, ways to patch them, etc. --
along with contentful/critical/useful suggestions/reactions.
Personally, I find Prolog Digest much more worthwhile. The discussions
are sometimes low level, but they almost always address specific issues,
with people often offering specific problems, code, algorithms, and
analyses of them. I'm afraid AIList has been taken over by people who
spend so much time exchanging philosophical discussions that they've
chased away others who are very busy doing research and have a low BS
tolerance level.
Of course, if the BS is reduced, that means that the real AI world will
have to make up the slack. But a less frequent digest with real content
would be a big improvement. {This won't make me popular, but perhaps part
of the problem is that most of the contributors seem to be people who
are not actually doing AI, but who are just vaguely interested in it, so
their speculations are ill-informed and indulgent. There is a use for
this kind of thing, but an AI digest should really be discussing
research issues. This gets back to the original problem with this
digest -- i.e. that researchers are not using it to address specific
research issues which arise in their work.}
Anyway, here are some examples of task/domain topics that could be
addressed. Each can be considered to be of the form: "How could we get
a computer to do X":
Model Dear Abby.
Understand/engage in an argument.
Read an editorial and summarize/answer questions about it.
Build a daydreamer
Give legal advice.
Write a science fiction short story
...
{I'm an NLP/Cognitive modeling person -- that's why my list may look
bizarre to some people}
You researchers in robotics/vision/etc. could discuss, say, how to build
a robot that can:
climb stairs
...
recognize a moving object
...
etc.
People who participate in this digest are urged to: (1) select a
task/domain, (2) propose a SPECIFIC example which represents
PROTOTYPICAL problems in that task/domain, (3) explain (if needed) why
that specific example is prototypic of a class of problems, (4) propose
a (most likely partial) solution (with code, if at that stage), and (5)
solicit contentful, critical, useful, helpful reactions.
This is the way Prolog.digest is currently functioning, except at the
programming language level. AIList could serve a useful purpose if it
were composed of ongoing research discussions about SPECIFIC, EXEMPLARY
problems, along with approaches, their limitations, etc.
If people don't think a particular problem is the right one, then they
could argue about THAT. Either way, it would elevate the level of
discussion. Most of my students tell me that they no longer read
AIList. They're turned off by the constant attempts to "defend or
define AI".
Part II. Reply to R-Johnson
Some of R-Johnson's criticisms of AI seem to stem from viewing
AI strictly as a TOOLS-oriented science.
{I prefer to refer to STRUCTURE-oriented work (i.e. content-free) as
TOOLS-oriented work and CONTENT-oriented work as DOMAIN or
PROCESS-oriented. I'm referring to the distinction that was brought up
by Schank in "The Great Debate" with McCarthy at AAAI-83 Wash DC.}
In general, tools-oriented work seems more popular and accepted
than content/domain-oriented work. I think this is because:
1. Tools are domain independent, so everyone can talk about them
without having to know a specific domain -- kind of like bathroom
humor being more universally communicable than topical-political
humor.
2. Tools have nice properties: they're general (see #1 above);
they have weak semantics (e.g. 1st order logic, lambda-calculus)
so they're clean and relatively easy to understand.
3. No one who works on a tool need be worried about being accused
of "ad hocness".
4. Breakthroughs in tools-research happen rarely, but when one
does, the people associated with the breakthrough become
instantly famous because everyone can use their tool (e.g. Prolog).
In contrast, content or domain-oriented research and theories suffer
from the following ills:
1. They're "ad hoc" (i.e. referring to THIS specific thing or
other).
2. They have very complicated semantics, poorly understood,
hard to extend, fragile, etc. etc.
However, many of the most interesting problems pop up in trying
to solve a specific problem which, if solved, would yield insight
into intelligence. Tools, for the most part, are neutral with respect
to content-oriented research questions. What does Prolog or Lisp
have to say to me about building a "Dear Abby" natural language
understanding and personal advice-giving program? Not much.
The semantics of lisp or prolog says little about the semantics of the
programs which researchers are trying to discover/write in Prolog or Lisp.
Tools are tools. You take the best ones off the shelf you can find for
the task at hand. I love tools and keep an eye out for
tools-developments with as much interest as anyone else. But I don't
fool myself into thinking that the availability of a tool will solve my
research problems.
{Of course no theory is exclusively one or the other. Also, there are
LEVELS of tools & content for each theory. This levels aspect causes
great confusion.}
By and large, AIList discussions (when they get around to something
specific) center too much around TOOLS and not PROCESS MODELS (ie
SPECIFIC programs, predicates, rules, memory organizations, knowledge
constructs, etc.).
What distinguishes AI from compilers, OS, networking, or other aspects
of CS are the TASKS that AI-ers choose. I want computers that can read
"War and Peace" -- what problems have to be solved, and in what order,
to achieve this goal? Telling me "use logic" is like telling me
to "use lambda calculus" or "use production rules".
Part III. Use and abuse of jargon in AI.
Someone recently commented in this digest on the abuse of jargon in AI.
Since I'm from the Yale school, and since Yale commonly gets accused of
this, I'm going to say a few words about jargon.
Different jargon for the same tools is BAD policy. Different jargon
to distinguish tools from content is GOOD policy. What if Schank
had talked about "logic" instead of "Conceptual Dependencies"?
What a mistake that would have been! Schank was trying to specify
how specific meanings (about human actions) combine during story
comprehension. The fact that prolog could be used as a tool to
implement Schank's conceptual dependencies is neutral with respect
to what Schank was trying to do.
At IJCAI-83 I heard a paper (exercise for the reader to find it)
which went something like this:
The work of Dyer (and others) has too many made-up constructs.
There are affects, object primitives, goals, plans, scripts,
settings, themes, roles, etc. All this terminology is confusing
and unnecessary.
But if we look at every knowledge construct as a schema (frame,
whatever term you want here), then we can describe the problem much
more elegantly. All we have to consider are the problems of:
frame activation, frame deactivation, frame instantiation, frame
updating, etc.
Here, clearly we have a tools/content distinction. Wherever
possible I actually implemented everything using something like
frames-with-procedural-attachment (ie demons). I did it so that I
wouldn't have to change my code all the time. My real interest,
however, was at the CONTENT level. Is a setting the same as an emotion?
Does the task: "Recall the last 5 restaurants you were at" evoke the
same search strategies as "Recall the last 5 times you accomplished x",
or "the last 5 times you felt gratitude."? Clearly, some classes of
frames are connected up to other classes of frames in different ways.
It would be nice if we could discover the relevant classes and it's
helpful to give them names (ie jargon). For example, it turns out that
many (but not all) emotions can be represented in terms of abstract goal
situations. Other emotions fall into a completely different class (e.g.
religious awe, admiration). In my program "love" was NOT treated as
(at the content level) an affect.
When I was at Yale, at least once a year some tools-oriented person
would come through and give a talk of the form: "I can
represent/implement your Scripts/Conceptual-Dependency/
Themes/MOPs/what-have-you using my tool X" (where X = ATNs, Horn
clauses, etc.).
I noticed that first-year students usually liked such talks, but the
advanced students found them boring and pointless. Why? Because if
you're content-oriented you're trying to answer a different set of
questions, and discussion of the form: "I can do what you've already
published in the literature using Prolog" simply means "consider Prolog
as a nice tool" but says nothing at the content level, which is usually
where the advanced students are doing their research.
I guess I'm done. That'll keep me for a year.
-- Michael Dyer
------------------------------
Date: Mon 28 Nov 83 08:59:57-PST
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: CS Colloq 11/29: John Seely Brown
[Reprinted from the SU-SCORE bboard.]
Tues, Nov 29, 3:45 MJH refreshments; 4:15 Terman Aud (lecture)
A COMPUTATIONAL FRAMEWORK FOR A QUALITATIVE PHYSICS--
Giving computers "common-sense" knowledge about physical mechanisms
John Seely Brown
Cognitive Sciences
Xerox, Palo Alto Research Center
Humans appear to use a qualitative causal calculus in reasoning about
the behavior of their physical environment. Judging from the kinds
of explanations humans give, this calculus is quite different from
the classical physics taught in classrooms. This raises questions as
to what this (naive) physics is like, how it helps one to reason
about the physical world and how to construct a formal calculus that
captures this kind of reasoning. An analysis of this calculus along
with a system, ENVISION, based on it will be covered.
The goals for the qualitative physics are (i) to be far simpler than
classical physics and yet retain all the important distinctions
(e.g., state, oscillation, gain, momentum), (ii) to produce causal
accounts of physical mechanisms, and (iii) to provide a logic for
accounts of physical mechanisms, and (3) to provide a logic for
common-sense, causal reasoning for the next generation of expert
systems.
A new framework for examining causal accounts has been suggested
based on using collections of locally interacting processors to
represent physical mechanisms.
------------------------------
End of AIList Digest
********************
∂29-Nov-83 2220 GOLUB@SU-SCORE.ARPA IBM meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 29 Nov 83 22:19:52 PST
Date: Tue 29 Nov 83 22:18:58-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: IBM meeting
To: faculty@SU-SCORE.ARPA
I append the message from George Paul of IBM. At today's lunch, there seemed
to be a fairly negative attitude about this meeting. I personally believe
that we should consider getting together with some IBM representatives,
though perhaps we should be careful about the agenda.
At any rate, please let me know whether you would be willing to participate.
If you definitely do not want to be involved let me know that too.
GENE
Here is the Paul message to me.
With regard to the meeting I suggested during our conversation at Norfolk,
let me re-iterate. The purpose of the meeting would be to provide the
opportunity for our departments to get to know each other better and to
explore areas of possible cooperative research either formally or informally.
Toward this goal, I would like to select with you, say four topics of mutual
research interest to Stanford and to us. These topics would then serve as the
basis for a "mini-symposium" in which selected speakers from your faculty and
graduate students, and research staff members from Yorktown and San Jose would
present technical papers on their current research. A symposium along these
lines could I believe be scheduled for a day and a half or two days in duration.
We would like the program to be relaxed and informal, allowing plenty of time
for discussions. IBM would host an evening banquet either prior to the
first day or between the first and second days for people to become better
acquainted.
In addition to the speakers, I would invite appropriate management from
Yorktown, San Jose and the Palo Alto Scientific Center, including Herb Schorr
and the directors from our department. I would like to schedule the meeting
for some time early next year, convenient to your class schedules and
during which Herb is available. The meeting could be held on-site at Stanford
if it is convenient, or I can arrange for appropriate facilities in the area.
We would also like to invite faculty from CSL and CIS. I have spoken to
Mike Flynn about this possibility previously.
Areas of interest to us would include: computer architecture and organization
including parallel processing, workstations and local-area networks, VLSI and
design automation, AI and expert systems, programming technology, etc.
Please let me know your thoughts regarding meeting, subjects of interest to
you and possible dates.
George
-------
jmc - While IBM isn't as strong in computer science as its expenditures
over a long period should have made them, I think the contact is worthwhile,
and I will be glad to take part in the meeting. There are many interesting
people at Yorktown and some at San Jose.
∂30-Nov-83 0817 KJB@SRI-AI.ARPA Burstall's visit
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Nov 83 08:16:59 PST
Date: Wed 30 Nov 83 08:17:35-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Burstall's visit
To: csli-folks@SRI-AI.ARPA
Rod Burstall, the remaining member of our Advisory Panel, is here to
visit this week. I hope everyone interested in area C will speak
with him.
Jon
-------
∂30-Nov-83 0941 @SRI-AI.ARPA:BrianSmith.pa@PARC-MAXC.ARPA Area C Meeting with Rod Burstall
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Nov 83 09:40:54 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Wed 30 Nov 83 09:40:56-PST
Date: 30 Nov 83 09:33 PDT
From: BrianSmith.pa@PARC-MAXC.ARPA
Subject: Area C Meeting with Rod Burstall
To: CSLI-Folks@SRI-AI.ARPA
cc: BrianSmith.pa@PARC-MAXC.ARPA
We will have a general area C meeting, at 11:00 a.m. this Friday, Dec.
2, at Ventura Hall, to meet with Rod Burstall. It will be a chance for
him to get to know us, and for us all to talk about general directions,
projects, interests, and problems that we see in this area. See you
there.
Brian
∂30-Nov-83 1030 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA Next week's colloquium
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Nov 83 10:30:35 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Wed 30 Nov 83 10:29:50-PST
Date: Wed, 30 Nov 83 10:26 PST
From: desRivieres.PA@PARC-MAXC.ARPA
Subject: Next week's colloquium
To: csli-folks@sri-ai.ARPA
Could someone please tell me who is giving next week's colloquium. I
need to know by this afternoon, so that it can be announced in the
newsletter. Thanks.
∂30-Nov-83 1123 TAJNAI@SU-SCORE.ARPA Call for Bell Fellowship Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 30 Nov 83 11:22:58 PST
Date: Wed 30 Nov 83 11:20:42-PST
From: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: Call for Bell Fellowship Nominations
To: faculty@SU-SCORE.ARPA
cc: JF@SU-SCORE.ARPA
We have received the call for Bell Fellowship nominations.
Send me the name of a student you wish to nominate by Monday,
December 12.
The Bell Fellowship is for 4 years, so it should be a student who
will finish in approximately 4 years. Only US citizens are qualified.
We will receive ONE fellowship only, and they have requested that we
nominate 2 or 3 students from the department.
This is a CSD Fellowship not a Forum Fellowship.
Carolyn
-------
∂30-Nov-83 1131 GROSZ@SRI-AI.ARPA A&B meeting postponed
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Nov 83 11:31:02 PST
Date: Wed 30 Nov 83 11:31:27-PST
From: Barbara J. Grosz <GROSZ@SRI-AI.ARPA>
Subject: A&B meeting postponed
To: csli-principals@SRI-AI.ARPA
cc: bmacken@SRI-AI.ARPA
In going over the budget for tomorrow's meeting, Betsy found that things
were quite different--and, we think MUCH BETTER--from what we had
previously. Betsy wants some more time to make sure we have things
straight, so we will not meet until sometime next week.
Will let you know more for sure when we do.
Barbara
-------
∂30-Nov-83 1435 PATASHNIK@SU-SCORE.ARPA phone number for prospective applicants
Received: from SU-SCORE by SU-AI with TCP/SMTP; 30 Nov 83 14:33:48 PST
Date: Wed 30 Nov 83 14:27:23-PST
From: Student Bureaucrats <PATASHNIK@SU-SCORE.ARPA>
Subject: phone number for prospective applicants
To: students@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA, secretaries@SU-SCORE.ARPA,
research-associates@SU-SCORE.ARPA
cc: bureaucrat@SU-SCORE.ARPA
Reply-To: bureaucrat@score
(415) 497-4112 is the phone number that prospective applicants can
call between 1 and 2pm on any weekday or between 2:30 and 3:30pm
on Tuesdays. The person answering the phone will be in MJH 450,
so you can send any prospective applicant who shows up in person
during those times up to 450 to have questions answered.
--Oren and Yoni, bureaucrats
-------
∂30-Nov-83 1647 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: limitations of logic
Received: from MIT-MC by SU-AI with TCP/SMTP; 30 Nov 83 16:46:53 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 30 Nov 83 19:44-EST
Received: From Csnet-Cic.arpa by UDel-Relay via smtp; 30 Nov 83 19:36 EST
Date: 30 Nov 83 17:49:24 EST (Wed)
From: Don Perlis <perlis%umcp-cs@csnet-cic.arpa>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Re: limitations of logic
To: GAVAN%MIT-OZ%mit-mc.arpa@udel-relay.arpa,
Don Perlis <perlis%umcp-cs%csnet-cic.arpa@udel-relay.arpa>
Cc: phil-sci%mit-oz%mit-mc.arpa@udel-relay.arpa
Via: UMCP-CS; 30 Nov 83 18:20-EST
From: GAVAN%MIT-OZ%mit-mc.arpa@UDel-Relay
The "language with self-reference" conjecture is
interesting, but it's still only a conjecture. How can
we ever possibly know that the alpha is the omega, or
that at the base of reality, as its primary
constituent, is the entire universe. This is all very
religious.
From Perlis:
On the contrary, I don't suggest any of this as dogma.
I don't suggest that it is either, only that it's unknowable.
But we don't *know* that it's unknowable! You're mistaking your
inability to know it now, for an inability to know it ever. We don't
know well enough what knowing is, to say what can't be known. You are
taking this as given, or obvious: I say it is at present dogma.
∂30-Nov-83 1657 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: tarski on meaning, again
Received: from MIT-MC by SU-AI with TCP/SMTP; 30 Nov 83 16:56:59 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 30 Nov 83 19:44-EST
Received: From Csnet-Cic.arpa by UDel-Relay via smtp; 30 Nov 83 19:37 EST
Date: 30 Nov 83 18:02:09 EST (Wed)
From: Don Perlis <perlis%umcp-cs@csnet-cic.arpa>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Re: tarski on meaning, again
To: John McLean <mclean%nrl-css.arpa@udel-relay.arpa>,
PHIL-SCI%mit-mc.arpa@udel-relay.arpa
Via: UMCP-CS; 30 Nov 83 18:21-EST
From: John McLean <mclean%nrl-css.arpa@UDel-Relay>
In stating that one can accept Tarski's definition of truth
while rejecting meanings completely I was thinking primarily of
Quine. For Quine translation does not preserve meaning, but
only behavioristically discernable behavior. As a consequence,
translation is indeterminate since any translation manual that
preserves behavior is equally correct. This constitutes a
rejection of the traditional concept of "meaning" since
meanings were regarded as what could separate a correct
behavioristically adequate translation manual from an incorrect
one. With respect to your example, why doesn't
'(x)(y)(x+y=y+x)' mean that for all ordinals x and y, their
ordinal sum is invariant with respect to order?
What is the difference between this, and saying integer addition is
commutative? Or do you mean to allow also infinite ordinals? If the
latter, then there is a non-behavior-preserving difference: infinite
ordinals are not sum-invariant under order reversal.
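A standard worked instance of that last point, added here only for concreteness:
$1 + \omega = \sup_{n<\omega}(1+n) = \omega$, while $\omega + 1 > \omega$;
so every finite instance of $x+y=y+x$ holds, yet the law fails as soon as
infinite ordinals are admitted.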
∂30-Nov-83 1706 @MIT-MC:perlis%umcp-cs@CSNET-CIC Re: Model Theoretic Ontologies
Received: from MIT-MC by SU-AI with TCP/SMTP; 30 Nov 83 17:06:37 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 30 Nov 83 19:44-EST
Received: From Csnet-Cic.arpa by UDel-Relay via smtp; 30 Nov 83 19:38 EST
Date: 30 Nov 83 18:16:35 EST (Wed)
From: Don Perlis <perlis%umcp-cs@csnet-cic.arpa>
Return-Path: <perlis%umcp-cs@CSNet-Relay>
Subject: Re: Model Theoretic Ontologies
To: John Batali <Batali%MIT-OZ%mit-mc.arpa@udel-relay.arpa>,
DAM%MIT-OZ%mit-mc.arpa@udel-relay.arpa,
BATALI%MIT-OZ%mit-mc.arpa@udel-relay.arpa
Cc: phil-sci%MIT-OZ%mit-mc.arpa@udel-relay.arpa
Via: UMCP-CS; 30 Nov 83 18:34-EST
From DAM:
The ontology of model-theoretic semantics is given by the
models. First order logic has one particular kind of model
but richer logics could have richer kinds of models and yet
still be based on Tarskian semantics. Thus Tarskian
semantics allows for arbitrarily rich ontologies!!
From: John Batali <Batali%MIT-OZ%mit-mc.arpa@UDel-Relay>
Okay fine. It sounds like the claim is that Tarskian semantics ALLOWS
for arbitrarily rich ontologies. But to really get representation
right, we have to HAVE an adequately rich ontology. I was arguing, and
I think that you accept, that a model in which statements are either
true or not is just insufficient. And I think that the point of Carl's
polemic against "logic programming" is that such is all the model you
seem to get in what logic programmers currently call logic.
You miss the point. Tarskian semantics shows the possibilities for
models (ontologies). A *particular* one amounts to picking a particular such
model.
But to do programming, there must be some notion of "process"
in the ontology, some idea of things happening in some temporal
relation. There must be some notion of the kinds of objects
that there can be, and what sorts of relations can hold among
them. The point is that a good representation language has to
be more than just "logic." It must be, say, logic with some
ontological committments as to what sorts of things are out
there to be described. In such a case we would be USING logic
to do representation, but we would not be using JUST logic.
You are using the word 'logic' in a way not standard in the
mathematical logic community. *A* *logic* can have in it whatever
axioms we like. Perhaps you are referring to a pure predicate calculus?
This will have no axioms other than (essentially) tautologies.
Logic alone is inadequate, the argument goes because it, by
itself, presupposes only the assumption that "the true" exists.
Presuppositions of, say, processes, and time and so on can be
represented in logic, but in this case we are using logic to
represent our theory of the world and "meanings" are defined in
terms of that theory, not logic.
This is just terminology.
∂30-Nov-83 1726 KJB@SRI-AI.ARPA Tomorrow a.m
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Nov 83 17:26:31 PST
Date: Wed 30 Nov 83 17:26:46-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Tomorrow a.m
To: csli-folks@SRI-AI.ARPA
Rod Burstall will be around from 9 on, in case any of you would
like to talk with him then. There is no executive committee
meeting, Bob.
-------
∂30-Nov-83 2011 @MIT-ML:crummer@AEROSPACE Model Theoretic Ontologies
Received: from MIT-ML by SU-AI with TCP/SMTP; 30 Nov 83 20:11:12 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 30 Nov 83 23:08-EST
Date: Wed, 30 Nov 83 19:18:53 PST
From: Charlie Crummer <crummer@AEROSPACE>
To: DAM%MIT-OZ@MIT-MC.ARPA
CC: PHIL-SCI@MIT-MC
Subject: Model Theoretic Ontologies
In-reply-to: <DAM.11971237931.BABYL@MIT-OZ>
In re: "Dogs are mammals."
If this statement assumes the existence of the model "mammals" then it
calls for a comparison of the attributes of the set "dogs" with the attributes
comprising the model "mammals". If the attributes match (the mammalness
attributes), then the statement can be said to be true.
If the statement is a declaration intended to define (create) the model
"mammals" then the "intersection" (forgive me, set theorists) of all the
attributes of the examples used to define the model, e.g. "Whales are mammals;
Bats are mammals;" etc., serves as the definition of the model "mammals". Note
that according to this interpretation of the sentence it is neither true nor
false.
In re: "Reagan is President."
If this statement assumes the existence of the model "President" then one
should be able to test its truth by examining the match between presidential
attributes and some attributes of Reagan's. If one includes statesmanship and
intelligence as presidential attributes the statement is demonstrably false.
(Sorry for the outburst.) It is true, however, so this understanding of the
sentence is therefore erroneous.
Of course, the statement is true not because of any attributes of Reagan
himself but because someone (the Speaker of the House?) has been empowered
to perform a ceremony and, under certain circumstances, i.e. his election,
DECLARE him president. The statement is true for this reason only.
--Charlie
∂30-Nov-83 2316 JRP@SRI-AI.ARPA Outsiders and Insiders
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Nov 83 23:16:24 PST
Date: Wed 30 Nov 83 23:10:59-PST
From: John Perry <JRP@SRI-AI.ARPA>
Subject: Outsiders and Insiders
To: csli-principals@SRI-AI.ARPA
Dear friends,
A while back Jon mentioned to us that the advisory committee noted that some
people felt there were insiders and outsiders, and since then some people have
expressed that this is so. Here is my comment, for what it is worth.
There isn't much point in denying that we do have a problem, since this seems
like a sort of self-certifying problem. It's worth trying to understand where
it came from and how things might evolve to lessen the problem.
It was about a year and a month ago that this whole thing got started. But
after the two SDF board meetings, there was a lull until well into the winter
quarter. To me, upon reflection, the most amazing thing about what happened in
the last nine or ten months is not the amount of dollars per month of proposal
writing, but how different the Center is from what it was planned to be. (The
second most amazing thing is how well it is working.)
I invented the term "principal". The idea was that while only Stanford regular
faculty could be PI's, we wanted equal status for the non-Stanford
participants. So principal meant someone who would be a principal investigator
if it weren't for the rules, and that meant anyone who seemed to have that
status on the individual proposals which, at that time, were at the heart of
the thing.
Along the way, decisions about principalhood were made on other bases too,
though. A couple were added to have access to their wisdom and prestige.
Others who might have been principals were not made such, for reasons of
balance and because after a while it became important that things quit
changing. But still the conception I had was that the Center would be a
consortium of groups of equals, each with a pot of money and other goodies,
with a director to gently oversee things.
It is important to realize that this conception is ancient history. I couldn't
pull it off. The first version of the proposal was built around it, and it was
rejected, emphatically, in less time than it took to print it. SDF wanted, and
eventually got, something quite different. Moreover, they may be right. I
have come to have enormous respect for CS and his board. And I am enjoying the
Center enormously (every day now I see things happening I know nothing about,
and it feels real good). But anyway, SDF refused to have the PI-ship
distributed, even among the Stanford participants. It wanted an integrated
research project, unique though its structure may be, with clear lines of
authority. And they created conditions in which that authority will have to be
exercised if we are to succeed.
This ancient conception is important, because it, together with the enormous
pressure of time, affected the way things were done between February and July 1.
We started out having meetings of the principals which were full of openness
and communication--although we all felt a little on the outside of CS's thought
processes, which was where the action was. As time went on, it increasingly
became Brian, Barbara, and me making decisions because there just wasn't time to
do it any other way; I often felt there wasn't even time for me to get to know
the people and ideas I was endlessly writing and talking about. The idea that
in a while we could relax in our leisurely democratic center made this easier
to take. I think the feelings that developed then--perfectly justified
ones--are part of what is coming to the surface now, and that's only natural.
But the present leadership shouldn't be held accountable for sins of the past,
or for the fact that the future that promised to make those sins less venial
failed to come to pass.
Barwise is in charge in a sense in which he never wanted or intended to be. He
did not seize power. He was talked into being director at a point when the
earlier conception was the plan. This was during the aforementioned lull, when
everything stood still until a director was found. If he hadn't agreed, that
lull would still be going on. Moreover, what he agreed to do was to take over
a functioning and funded Center. He was, of course, faced with something quite
different. We weren't funded, the proposal, although accepted in a sense, had
to be bolstered by a plan of research, and an enormous amount of negotiation
and detail remained to be worked out, some of which still remains. Everyone's
grand plans, and some people's very salaries, rested on his shoulders. They
still do. If the infrastructure isn't in place and the integrated research
plan working in a few months, the Center will wither away.
The point of all of this is as follows.
1. We will not have the same feelings of being on the inside and knowing what
is going on that we would under the earlier conception. The Center isn't a
democracy with a figurehead. It is, at present, a research project with a
leader. We all know that the Center doesn't equal the situated language
project, and I think with time we will be able to exploit that fact and evolve
more comfortable structures. But it will take time and it is not our most
pressing problem.
2. We are extremely lucky to have Barwise here and in charge. No one else
agreed to do it, and moreover I don't think anyone else could have done it.
Communication and good feelings are among the many problems he has to face; they
are problems he by nature worries about and can be trusted to work out. But
frankly I think they shouldn't be at the top of the agenda right now.
3. One step we can take is for the members of the exec. committee to
immediately increase their efforts to communicate what goes on to their natural
constituencies.
4. Another step is to create our own insides to be in. When the Center is
functioning properly, the important decisions will all be decisions about what
to do, what research projects, visitors, conferences, meetings, etc. will
happen. The action will be where it should be, in initiatives of the
principals, affiliates and associates. Policies will have been set, budgets
arrived at, procedures established, and the damn computers wired up, offices
secured for everyone, consulting professorships approved, the education program
in place. Happily, we are getting close to this. When we get there we must
face what seems to me a big problem, making sure our director isn't so busy
that he is on the outside of what matters.
-------
∂01-Dec-83 0224 RESTIVO@SU-SCORE.ARPA PROLOG Digest V1 #56
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Dec 83 02:23:53 PST
Date: Wednesday, November 30, 1983 10:23PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V1 #56
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Thursday, 1 Dec 1983 Volume 1 : Issue 56
Today's Topics:
Puzzle - The Lady or the Tiger,
Announcement - '84 LP Symposium Correction,
Implementations - Assert & Retract
----------------------------------------------------------------------
Date: Tue, 29 Nov 83 11:15 EST
From: Chris Moss <Moss.UPenn@Rand-Relay>
Subject: The Lady or the Tiger
Since it's getting near Christmas, here are a few puzzlers to
solve in Prolog. They're taken from Raymond Smullyan's delightful
little book of the above name. Sexist allusions must be forgiven.
There once was a king, who decided to try his prisoners by giving
them a logic puzzle. If they solved it they would get off, and
get a bride to boot; otherwise ...
The first day there were three trials. In all three, the king
explained, the prisoner had to open one of two rooms. Each room
contained either a lady or a tiger, but it could be that there
were tigers or ladies in both rooms.
On each room he hung a sign as follows:
        Sign I:  In this room there is a lady,
                 and in the other room there is a tiger.
        Sign II: In one of these rooms there is a lady,
                 and in one of these rooms there is a tiger.
"Is it true, what the signs say ?", asked the prisoner.
"One of them is true", replied the king, "but the other one is false"
If you were the prisoner, which would you choose (assuming, of course,
that you preferred the lady to the tiger) ?
---------------------------------------------------------
For the second and third trials, the king explained that either
both statements were true, or both were false. What is the
situation ?
Signs for Trial 2:
        Sign I:  At least one of these rooms contains a tiger.
        Sign II: A tiger is in the other room.
---------------------------------------------------------
Signs for Trial 3:
        Sign I:  Either a tiger is in this room,
                 or a lady is in the other room.
        Sign II: A lady is in the other room.
---------------------------------------------------------
Representing the problems is much more difficult than finding the
solutions. The latter two test a sometimes-ignored aspect of the
language.
Have fun !
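One possible encoding of Trial 1, offered as a sketch rather than as the
intended solution (the predicate names are invented, and the answer it
produces should be checked against Smullyan):

% Each room holds a lady or a tiger.
occupant(lady).
occupant(tiger).

% What the two signs claim about rooms I and II.
sign1(lady, tiger).            % "this room: lady, the other: tiger"
sign2(lady, tiger).            % "one room has a lady, one has a tiger"
sign2(tiger, lady).

% The king's condition for Trial 1: exactly one sign is true.
trial1(R1, R2) :-
    occupant(R1), occupant(R2),
    (  sign1(R1, R2), \+ sign2(R1, R2)
    ;  sign2(R1, R2), \+ sign1(R1, R2)
    ).

% ?- trial1(RoomI, RoomII).
% yields RoomI = tiger, RoomII = lady, i.e. the prisoner should open room II.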
------------------------------
Date: Wed 30 Nov 83 17:39:43-PST
From: Pereira@SRI-AI
Subject: 1984 IEEE Logic Programming Symposium
It has come to the attention of the organizers that hotel
reservations for this symposium will not be accepted unless
the correct registration form is used. Please write for a
registration form to
Registration - 1984 ISLP
Doug DeGroot, Program Chairman
IBM Thomas J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY 10598
or
Fernando Pereira
AI Center
SRI International
333 Ravenswood Ave., Menlo Park, CA 94025
ARPAnet: Pereira@SRI-AI
UUCP: ...!ucbvax!Pereira@SRI-AI
-- Fernando Pereira
------------------------------
Date: 27-Nov-83 18:42:48-CST (Sun)
From: Gabriel@ANL-MCS (John Gabriel)
Subject: Notes on Assert/Retract
I made some comments about assert/retract in Issue #50. Steve
Wolff (Steve@BRL-BMD) rightly questioned (in private
correspondence) the meaning of what I had said. After re-reading
The Archive, I think much of the discussion has been published by
others or is not relevant to anyone but Steve and me. But a few
things continue to nag, including a question not addressed with
Steve. So here is a brief comment about assert/retract in general,
with a question at the end, and an example of what seems to me a
reasonable use, with a request for alternatives.
First the comment:-
The expositions by Richard O'Keefe, Steve Hardy, and others convince
me that assert/retract are useful even if not strictly necessary,
that they will continue with us, that they have implementation
costs borne even by those who do not use them, and that their use
can be inappropriate and even dangerous.
So the question is:- How to distinguish between appropriate and
inappropriate use, and once having done that, can we invent
constructs usable only appropriately ? I don't know, but does
anybody have suggestions ?
Second, the example, a rather long one I fear but, at least to me,
interesting.
Since an R-S flip flop can be built from nand gates alone, and it
"remembers" whether the R or S input was last temporarily pulled
low (set to zero), what happens in a "naive" simulation of an R-S ff
in "pure" Prolog? This is not an entirely academic question about
global variables to keep history: I need to deal with a similar
practical issue in an Automated Diagnostician for a set of relay
logic.
One possible approach is the simulation algorithm of my note
ANL-83-70, filed in {SU-SCORE}PS:<Prolog>. Here it is:-
/* First the definition of a nand gate */
devt(nand,[a,b],[y]). /* Names of terminals on a generic device */
devd(nand,[1,1],[0]). /* nand(1,1)=0 */
devd(nand,[1,0],[1]).
devd(nand,[0,1],[1]).
devd(nand,[0,0],[1]).
/* now define the wiring of an R-S ff */
signal(r,nand,1,a). /* signal "r" is connected to terminal "a" of nand #1 */
signal(s,nand,2,a).
signal(q,nand,1,y).
signal(q1,nand,2,y).
signal(q1,nand,1,b).
signal(q,nand,2,b).
One can write a "signal tracer" recursing from outputs to inputs
(see ANL-83-70), that works if the signal flow graph is acyclic.
But this one isn't, and the recursion goes round and round a
feedback loop indefinitely. This is just an instance of the
transitive closure problem.
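A rough sketch of such a tracer, assuming single-output devices as in the
nand example above and a table input(Signal,Value) of externally driven
signals (this is only an illustration, not the ANL-83-70 code):
value(S,V) :- input(S,V).                  /* externally driven signal        */
value(S,V) :-
        devt(Type,InTerms,[OutTerm]),      /* a single-output device type     */
        signal(S,Type,N,OutTerm),          /* instance N whose output feeds S */
        inputs(InTerms,Type,N,InVals),     /* trace the input signals first   */
        devd(Type,InVals,[V]).             /* then look up the device function */
inputs([],AnyType,AnyN,[]).
inputs([T|Ts],Type,N,[V|Vs]) :-
        signal(S2,Type,N,T),               /* signal wired to input terminal T */
        value(S2,V),                       /* recurse from output to inputs    */
        inputs(Ts,Type,N,Vs).
On the R-S wiring above, value(q,V) chases q1, which chases q again,
exhibiting exactly the non-terminating feedback loop just mentioned.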
But a "compiler" may be written to transform the devt and signal
predicates to a Prolog rule. (The compiler will be placed in
<Prolog>, as soon as the report is made publicly available). The
resulting rule is:-
program([←r,←s], [←q,←q1]) :-
devd(nand,[←r,←q1],[←q]),
devd(nand,[←s,←q],[←q1]).
The CProlog "query"
:- program([R,S],[Q,Q1]).
returns pairs [R,S] [Q,Q1] as follows
[1,1] [1,0]
[1,1] [0,1]
[1,0] [0,1]
[0,1] [1,0]
[0,0] [1,1]
Note the two distinct solutions for [R,S] = [1,1]
In a representation by analogue circuits they are the limit
points at large time of solutions with [R,S]=[1,1].
The inputs [1,0] and [0,1] drive the system one to each limit
point, and if the inputs are later returned to [1,1], and the
hardware functions as intended, the system will stay at the
limit point after the input transition back to [1,1].
I need to model this phenomenon. One fairly obvious way is to
compile the signal flow graph specification to an augmented
program as follows:-
program([←r,←s],[←q,←q1]):-
devd(nand,[←r,←q1],[←q]),
devd(nand,[←s,←q],[←q1]),
retract(state(←,←)),
assert(state(←q,←q1)).
A "setup" process at the start asserts some standard state.
This is exactly what pressing the "reset" button does on your
personal computer.
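A sketch of such a setup step (again only an illustration; the standard
state [1,0] chosen here is arbitrary):
reset :- retract(state(Q,Q1)), fail.   /* erase any previous state ...   */
reset :- assert(state(1,0)).           /* ... then assert a standard one */
Backtracking through the first clause removes every old state fact, and
the second clause leaves exactly one for the augmented program to update.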
What is the point of all this ? Well, I do have a practical
objective of writing an automated diagnostician dealing with
memory in more or less this way. So I need some global data
structure holding memory of past system states.
I'd like suggestions of the best way to implement this data
structure, together with warnings about "nonos" for various
Prolog implementations.
At present we are running CProlog 1.2 on a VAX 11/780, but the
eventual target CPU may be a dedicated MC68000. A number of
other issues have to be addressed such as the proper interface
for use by a maintenance technician walking the plant carrying
a voltage and current probe. If no suitable Prolog is available
for which we can write special predicates to manage the user
interface (which might be a full duplex speech channel, and FM
telemetry data link), then we may need to consider "pure"
solutions using Maurice Bruynooghe's Prolog in Pascal because
we feel we understand this well enough to extend it as necessary.
------------------------------
End of PROLOG Digest
********************
∂01-Dec-83 0846 EMMA@SRI-AI.ARPA rooms
Received: from SRI-AI by SU-AI with TCP/SMTP; 1 Dec 83 08:46:36 PST
Date: Thu 1 Dec 83 08:46:54-PST
From: EMMA@SRI-AI.ARPA
Subject: rooms
To: csli-folks@SRI-AI.ARPA
If anyone out there knows of a room that can be used for a
potluck dinner and can hold about 100 people, I would appreciate
knowing about it. (This is CSLI business.)
Emma@sri-ai
ps. I received a lot of replies about the recycling bins. Ventura
Hall will receive an aluminum bin as soon as one becomes
available. Please remember to recycle paper (including
newspapers) but remove staples.
-------
∂01-Dec-83 0851 DKANERVA@SRI-AI.ARPA Newsletter No. 11, December 1, 1983
Received: from SRI-AI by SU-AI with TCP/SMTP; 1 Dec 83 08:50:39 PST
Date: Thu 1 Dec 83 08:09:46-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Newsletter No. 11, December 1, 1983
To: csli-friends@SRI-AI.ARPA
CSLI Newsletter
December 1, 1983 * * * Number 11
VISIT BY ROD BURSTALL
Rod Burstall, the remaining member of our Advisory Panel, is here
to visit this week. I hope that especially those interested in CSLI's
research area C (theories of situated computer languages) will take
this opportunity to speak with him.
- Jon Barwise
* * * * * * *
MEETING OF RESEARCHERS IN AREAS A AND B
The meeting for the A and B Area people will be held from 1:00 to
2:00 p.m., Thursday, December 1, in the Ventura Conference Room after
TINLunch.
MEETING OF RESEARCHERS IN AREA C
The researchers associated with projects in Area C will have a
general meeting at 11:00 a.m. this Friday, December 2, at Ventura
Hall, to meet with Rod Burstall. It will be a chance for him to get
to know us, and for us all to talk about general directions, projects,
interests, and problems that we see in this area.
JOINT MEETING FOR PROJECTS B3 AND B5
At the joint meeting of B3 and B5 on Wednesday, November 30,
Geoff Nunberg discussed his paper "Individuation in Context." Next
Wednesday, December 7, Phil Cohen will talk about indirect speech
acts. His talk will be at 9 a.m. in Ventura Hall.
MEETING FOR PROJECTS C1-D1
On November 29 and December 6, Yiannis Moschovakis will speak to
the CSLI C1-D1 working group, held each Tuesday at 9:30 at PARC. His
topic is: "On the Foundations of the Theory of Algorithms." These
talks will present in outline an abstract (axiomatic) theory of
recursion, which aims to capture the basic properties of recursion and
recursive functions on the integers, much like the theory of metric
spaces captures the basic properties of limits and continuous
functions on the reals. The basic notion of the theory is a (suitable
mathematical representation of an) algorithm. In addition to classical
recursion, the models of the theory include recursion in higher types,
positive elementary induction, and similar theories constructed by
logicians, but they also include pure Lisp, recursion schemes, and the
familiar programming languages (as algorithm describers).
Technically, one can view this work as the theory of many-sorted,
concurrent, and (more significantly) second-order recursion schemes.
The first lecture concentrates on the pure theory of recursion
and describes some of the basic results and directions of this theory.
The second lecture looks at some of the less developed connections of
this theory with the foundations of computer science, particularly the
relation between an algorithm and its implementations.
* * * * * * *
CSLI SCHEDULE FOR *THIS* THURSDAY, DECEMBER 1, 1983
10:00 Research Seminar on Natural Language
Speaker: Paul Kiparsky (MIT)
Topic: On lexical phonology and morphology.
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Paul Martin (SRI)
Paper for discussion: "Planning English Referring Expressions"
by Douglas Appelt
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speaker: Luca Cardelli (Bell Labs)
Title: "Type Systems in Programming Languages"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Charles Bigelow (CS, Stanford)
Title: "Selected Problems in Visible Language"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. $0.75 all-day parking is available
in a lot located just off Campus Drive, across from the construction
site.
* * * * * * *
TINLUNCH SCHEDULE
TINLunch is held at 12 noon each Thursday at Ventura Hall on the
Stanford University campus as a part of CSLI activities. Copies of
TINLunch papers will be at SRI in EJ251 and at Stanford University in
Ventura Hall.
On Thursday, December 1, Paul Martin will lead the discussion.
The paper for discussion will be:
"Planning English Referring Expressions"
by Douglas Appelt
This paper describes a theory of language generation based on
planning. The theory is illustrated through a detailed examination of
the problem of planning referring expressions. This theory provides a
framework in which one can account for noun phrases used to refer, to
supply additional information, and to clarify communicative intent
through coordination with the speaker's nonlinguistic actions. The
theory is embodied in a computer system called KAMP, which plans both
linguistic and nonlinguistic actions when given a high-level
description of the speaker's goals.
NEXT WEEK (DEC. 8):
Robert Moore will be leading the TINLunch discussion on a paper by
Daniel Dennett entitled "Cognitive Wheels: The Frame Problem of AI."
* * * * * * *
WHY CONTEXT WON'T GO AWAY
On Tuesday, November 29, Peter Gardenfors, who is visiting CSLI
this year from Lund University in Sweden, gave a talk entitled "An
Epistemic Semantics for Conditionals." The abstract is given below.
A semantics for different kinds of conditional sentences is
outlined. The ontological basis is states of belief and changes of
belief rather than possible worlds and similarities between worlds. It
is shown how the semantic analysis can account for some of the context
dependence of the interpretation of conditionals.
Next week's speaker: Ivan Sag
December 6, 1983, 3:15 p.m.
Ventura Hall
* * * * * * *
COMPUTER SCIENCE COLLOQUIUM NOTICE WEEK OF NOV 28-DEC 2
11/28/1983, Monday, 4:15 p.m. -- Robotics Seminar
Speaker: Clyde Coombs, Hewlett Packard
Place: MJ252
Title: Manufacturing Strategy for Information and Automation
11/29/1983, Tuesday, 2:30-3:30 -- Knowledge Representation Group Seminar
Speaker: Bob Blum, Stanford CSD
Place: TC-135 (Med School)
Title: Representing Clinical Causal Relations in the RX Knowledge Base
11/29/1983, Tuesday, 4:15 -- CS Colloquium
Speaker: John Seely Brown, Cognitive Sciences, Xerox PARC
Place: Terman Aud.
Title: A Computational Framework for a Qualitative Physics--Giving
       computers `common-sense' knowledge about physical mechanisms
11/30/1983, Wednesday, 2:00-4:00 -- Special Tutorial
Speaker: Dr. Adrian Walker, IBM Research Lab, San Jose
Place: MJH 252
Title: Introduction to PROLOG and Its Applications
11/30/1983, Wednesday, 2:15-4:00 -- Talkware Seminar
Speaker: Amy Lansky, Stanford U./SRI
Place: 380Y (Math Corner)
Title: GEM: A Methodology for Specifying Concurrent Systems
12/02/1983, Friday, 3:15 p.m. -- Database Research Seminar
Speaker: David Dewitt, University of Wisconsin
Place: MJH 352
Title: Benchmarking Database Management Systems and Machines
* * * * * * *
TALKWARE SEMINAR - CS 377
Date: November 30
Speaker: Amy Lansky (Stanford / SRI)
Topic: Specification of Concurrent Systems
Time: 2:15 - 4
Place: 380Y (Math corner)
This talk will describe the use of GEM, an event-oriented model
for specifying and verifying properties of concurrent systems. The
GEM model may be broken up into two components: computations and
specifications. A GEM computation is a formal representation of
concurrent execution. Program executions, as well as activity in
other domains, may be modeled. A GEM specification is a set of logic
formulas that may be applied to GEM computations. These formulas are
used to restrict computations in such a way that they form
characterizations of specific problems or represent executions of
specific languages.
A primary result of my research with GEM has been a demonstration
of the power and breadth of an event-oriented approach to concurrent
activity. The model has been used successfully to describe various
language primitives (the Monitor, CSP, ADA tasks), several problems,
including two distributed algorithms, and to verify concurrent
programs.
In this seminar, I will introduce some of the important features
of GEM and demonstrate their use in modeling many familiar
computational behavior patterns, including sequentiality,
nondeterminism, priority, liveness, fairness, and scope.
Specification of language concepts such as data abstraction,
primitives such as CSP's synchronous I/O, and familiar problems
(Readers/Writers) will be included. This talk will also discuss
directions for further research based on GEM. One possibility is the
use of graphics for the construction and simulation of GEM
specifications.
Date: December 7
Speaker: Donald Knuth (Stanford CS)
Topic: On the Design of Programming Languages
Time: 2:15 - 4
Place: 380Y (Math Corner)
Date: December 14
Speaker: Everyone
Topic: Summary and discussion
Time: 2:15 - 4
Place: 380Y (Math Corner)
* * * * * * *
SEMINAR IN LOGIC AND FOUNDATIONS OF MATHEMATICS
SPEAKER: Professor J. E. Fenstad, University of Oslo
TITLE: Peano's existence theorem for ordinary differential equations
in reverse mathematics and nonstandard analysis.
TIME: Wednesday, Nov. 30, 4:15-5:30 PM
PLACE: Stanford Mathematics Dept. Faculty Lounge (383-N)
Abstract: We continue the exposition of Steve Simpson's work on
reverse mathematics, locating the exact position for the provability
of Peano's theorem. It follows that the nonstandard proof is more
constructive than the standard textbook proof.
* * * * * * *
1984 INTERNATIONAL SYMPOSIUM ON LOGIC PROGRAMMING
Atlantic City, New Jersey
February 6-9, 1984
Sponsored by the IEEE Computer Society
Registration details from:
Registration - 1984 ISLP
Doug DeGroot, Program Chairman
IBM Thomas J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY 10598
or from (ARPANET): PEREIRA@SRI-AI
The opening address will be given by Professor J. A. (Alan)
Robinson of Syracuse University. The guest speaker will be Professor
Alain Colmerauer of the University of Aix-Marseille II, Marseille,
France. The keynote speaker will be Dr. Ralph E. Gomory, IBM Vice
President & Director of Research, IBM Thomas J. Watson Research
Center. On February 6, Ken Bowen of Syracuse University will present
"Tutorial: An Introduction to Prolog." Finally, during the remaining
three days, February 7-9, 35 papers will be presented in 11 sessions.
* * * * * * *
-------
∂01-Dec-83 0905 KJB@SRI-AI.ARPA Your memo
Received: from SRI-AI by SU-AI with TCP/SMTP; 1 Dec 83 09:05:41 PST
Date: Thu 1 Dec 83 08:57:39-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Your memo
To: JRP@SRI-AI.ARPA
cc: csli-principals@SRI-AI.ARPA
Thanks, John, for your memo. I do appreciate the support.
In connection with your 4th point, about creating our own insides to
be in: I have tried to facilitate that by creating groups to handle
what seem to me the interesting things that you mention. However,
rather than see that as where the action is, as I hoped, people seem
to resent the committees they are on, and several of them are not
functioning as far as I can tell. This makes everyone's complaints
about being on the outside particularly hard to take, and your memo
all the more welcome.
Jon
-------
∂01-Dec-83 1143 GOLUB@SU-SCORE.ARPA Next meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Dec 83 11:43:18 PST
Date: Thu 1 Dec 83 11:42:42-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Next meeting
To: CSD-Senior-Faculty: ;
cc: bscott@SU-SCORE.ARPA
The next meeting will take place on Tuesday, Dec 6 at 2:30.
We need to discuss the re-appointments and the search for chairperson.
GENE
-------
∂01-Dec-83 1430 GOLUB@SU-SCORE.ARPA Consulting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Dec 83 14:30:11 PST
Date: Thu 1 Dec 83 14:29:54-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Consulting
To: CSD-Faculty: ;
Just a reminder that I would like to receive your completed disclosure
form as soon as possible. GENE
-------
∂01-Dec-83 1555 @SU-SCORE.ARPA:reid@Glacier official rumor
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Dec 83 15:54:55 PST
Received: from Glacier by SU-SCORE.ARPA with TCP; Thu 1 Dec 83 15:54:14-PST
Date: Thursday, 1 December 1983 15:53:27-PST
To: Faculty@Score, CSLFaculty@Sierra
Subject: official rumor
From: reid@Glacier
From: Brian Reid <reid@Glacier>
George Pake, Xerox V.P. of Research, today announced to PARC employees
that DEC has hired Bob Taylor, Butler Lampson, and Chuck Thacker as the
core of a new DEC Computer Science Research Lab here in Palo Alto (or
perhaps Los Altos). His message was followed by a long explanation of
how PARC would continue to be a great place to work.
∂01-Dec-83 1656 GOLUB@SU-SCORE.ARPA course scheduling
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Dec 83 16:54:23 PST
Date: Thu 1 Dec 83 16:48:24-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: course scheduling
To: faculty@SU-SCORE.ARPA
Colleagues!
I am really being driven up the wall by all the proposed changes in
courses and schedules. The courses for this academic year have been scheduled
for some time. The schedule was sent to each of you. Once the courses and
times have been published in COURSES and DEGREES, we must teach those
courses at the times, dates, and quarters scheduled. The students take
COURSES and DEGREES very seriously and depend upon it for accuracy.
We cannot hire substitute lecturers. Each time we bring in an outside
lecturer we must pay their salary. Members of other departments are also
paid (and we even pay overhead to EE). We are very overdrawn on the
lecturing budget.
I insist that no changes be made unless there are very unusual conditions.
GENE
-------
∂01-Dec-83 1714 BMACKEN@SRI-AI.ARPA Staff meeting times
Received: from SRI-AI by SU-AI with TCP/SMTP; 1 Dec 83 17:14:34 PST
Date: Thu 1 Dec 83 16:36:25-PST
From: BMACKEN@SRI-AI.ARPA
Subject: Staff meeting times
To: csli-friends@SRI-AI.ARPA
I'll be meeting with the CSLI administrative staff on Friday mornings
at 8:30. We need this hour each week for planning, etc. It means that
during this time most phones at CSLI will not be answered. If you have
an emergency that can't wait until 9:30, call 497-0628 and let it ring
long enough for one of us to get from the conference room to the lobby
to answer it.
B.
-------
∂01-Dec-83 1803 @SU-SCORE.ARPA:lantz@diablo Re: course scheduling
Received: from SU-SCORE by SU-AI with TCP/SMTP; 1 Dec 83 18:03:00 PST
Received: from Diablo by SU-SCORE.ARPA with TCP; Thu 1 Dec 83 18:02:38-PST
Date: Thu, 1 Dec 83 18:02:29 pst
To: Gene Golub <GOLUB@SU-SCORE.ARPA>
Cc: faculty@SU-SCORE.ARPA
Subject: Re: course scheduling
In-Reply-To: Your message of Thu 1 Dec 83 16:48:24-PST.
From: Keith Lantz <lantz@diablo>
It would indeed be a much more organized world if we could live by
schedules devised up to a year in advance. And, in fact, I suspect
many of the scheduling problems would go away if professors/lecturers
were consulted about times, not just quarters and courses. I for one do not
appreciate being "placed" in time slots over which I have no control.
Fortunately, this has not happened often and the way I have always
handled it has been to reschedule the class by negotiating WITH the
class. Nevertheless, it would be a good idea for the course schedulers
to take the professors' likes and dislikes into account, and in advance.
Keith
∂02-Dec-83 0153 LAWS@SRI-AI.ARPA AIList Digest V1 #107
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Dec 83 01:53:32 PST
Date: Thu 1 Dec 1983 21:58-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #107
To: AIList@SRI-AI
AIList Digest Friday, 2 Dec 1983 Volume 1 : Issue 107
Today's Topics:
Programming Languages - Lisp Productivity,
Alert - Psychology Today,
Learning & Expert Systems,
Intelligence - Feedback Model & Categorization,
Scientific Method - Psychology,
Puzzle - The Lady or the Tiger,
Seminars - Commerce Representation & Learning Linguistic Categories
----------------------------------------------------------------------
Date: 27 Nov 83 16:57:39-PST (Sun)
From: decvax!tektronix!tekcad!franka @ Ucb-Vax
Subject: Re: lisp productivity question - (nf)
Article-I.D.: tekcad.145
I don't have any documentation, but I heard once from an attendee
at a workshop on design automation that someone had reported a 5:1 productivity
improvement in LISP vs. C, PASCAL, etc. From personal experience I know this
to be true, also. I once wrote a game program in LISP in two days. I later
spent two weeks debugging the same game in a C version (I estimated another
factor of 4 for a FORTRAN version). The nice thing about LISP is not that
the amount of code written is less (although it is, usually by a factor of
2 to 3), but that its environment (even in the scrungy LISPs) is much easier
to debug and modify code in.
From the truly menacing,
/- -\ but usually underestimated,
<-> Frank Adrian
(tektronix!tekcad!franka)
[A caveat: Lisp is very well suited to the nature of game programs.
A fair test would require that data processing and numerical analysis
problems be included in the mix of test problems. -- KIL]
------------------------------
Date: Mon, 28 Nov 83 11:03 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: Psychology Today
The December issue of Psychology Today (V 17, #12) has some more articles
that may be of interest to AI people. The issue is titled "USER FRIENDLY"
and talks about technological advances that have made machines easier to use.
The articles of interest are:
On Papert, Minsky, and John Anderson page 26
An Article written by McCarthy page 46
An Interview with Alan Kay Page 50
(why they call him the Grand old Man is
beyond me, Alan is only 43)
- steve
------------------------------
Date: Tue 29 Nov 83 18:36:01-EST
From: Albert Boulanger <ABOULANGER@BBNG.ARPA>
Subject: Learning Expert systems
Re: Brint Cooper's remark on non-learning expert systems being "dumb":
Yes, some people would agree with you. In fact, Dr. R.S. Michalski's group
at the U of Illinois is building an Expert System, ADVISE, that incorporates
learning capabilities.
Albert Boulanger
ABOULANGER@BBNG
------------------------------
Date: Wed, 30 Nov 83 09:07 PST
From: NNicoll.ES@PARC-MAXC.ARPA
Subject: "Intelligence"
I see Intelligence as the sophistication of the deep structure
mechanisms that generate both thought and behavior. These structures
(per Albus), work as cross-coupled hierarchies of phase-locked loops,
generating feedback hypotheses about the stimulus at each level of the
hierarchy. These feedback hypotheses are better at predicting and
matching the stimulus if the structure holds previous patterns that are
similar to the present stimulus. Therefore, intelligence is a function
of both the amount of knowledge possible to bring to bear on pattern
matching a present problem (inference), and the number of levels in the
structure of the hierarchy the organism (be it mechanical or organic)
can bring to bear on breaking the stimulus/pattern down into its
component parts and generate feedback hypotheses to adjust the organism's
response at each level.
I feel any structure sufficiently complex to exhibit intelligence, be it
a bird-brained idiot whose height of reasoning is "find fish - eat
fish", or "Deep Thought" who can break down the structures and reason
about a whole world, should be considered intelligent, but with
different "amounts" of intelligence, and possibly about different
experiences. I do not think there is any "threshold" above which an
organism can be considered intelligent and below which they are not.
This level would be too arbitrary a structure for anything except very
delimited areas.
So, let's get on with the pragmatic aspects of this work, creating better
slaves to do our scut work for us, our reasoning about single-mode
structures too complex for a human brain to assimilate, our tasks in
environments too dangerous for organic creatures, and our tasks too
repetitious for the safety of the human brain/body structure, and move
to a lower priority the re-creation of pseudo-human "intelligence". I
think that would require a pseudo-human brain structure (combining both
"Emotion" and "Will") that would be interesting only in research on
humanity (create a test-bed wherein experiments that are morally
unacceptable when performed on organic humans could be entertained).
Nick Nicoll
------------------------------
Date: 29 Nov 83 20:47:33-PST (Tue)
From: decvax!ittvax!dcdwest!sdcsvax!sdcsla!west @ Ucb-Vax
Subject: Re: Intelligence and Categorization
Article-I.D.: sdcsla.461
From: AXLER.Upenn-1100@Rand-Relay
(David M. Axler - MSCF Applications Mgr.)
I think Tom Portegys' comment in 1:98 is very true.
Knowing whether or not a thing is intelligent, has a soul,
etc., is quite helpful in letting us categorize it. And,
without that categorization, we're unable to know how to
understand it. Two minor asides that might be relevant in
this regard:
1) There's a school of thought in the fields of
linguistics, folklore, and anthropology, which is
based on the notion (admittedly arguable) that the only way
to truly understand a culture is to first record and
understand its native categories, as these structure both
its language and its thought, at many levels. (This ties in
to the Sapir-Whorf hypothesis that language structures
culture, not the reverse...) From what I've read in this
area, there is definite validity in this approach. So, if
it's reasonable to try and understand a culture in terms of
its categories (which may or may not be translatable into
our own culture's categories, of course), then it's equally
reasonable for us to need to categorize new things so that
we can understand them within our existing framework.
Deciding whether a thing is or is not intelligent seems to be a hairier
problem than "simply" categorizing its behavior and other attributes.
As to point #1, trying to understand a culture by looking at how it
categorizes does not constitute a validation of the process of
categorization (particularly in scientific endeavours). Restated: There
is no connection between the fact that anthropologists find that studying
a culture's categories is a very powerful tool for aiding understanding,
and the conclusion that we need to categorize new things to understand them.
I'm not saying that categorization is useless (far from it), but Sapir-Whorf's
work has no direct bearing on this subject (in my view).
What I am saying is that while deciding to treat something as "intelligent",
e.g., a computer chess program, may prove to be the most effective way of
dealing with it in "normal life", it doesn't do a thing for understanding
the thing. If you choose to classify the chess program as intelligent,
what has that told you about the chess program? If you classify it
as unintelligent...? I think this reflects more upon the interaction
between you and the chess program than upon the structure of the chess
program.
-- Larry West UC San Diego
-- ARPA: west@NPRDC
-- UUCP: ucbvax!sdcsvax!sdcsla!west
-- or ucbvax:sdcsvax:sdcsla:west
------------------------------
Date: 28 Nov 83 18:53:46-PST (Mon)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: Rational Psych & Scientific Method
Article-I.D.: ncsu.2416
Well, I hope this is the last time ....
Again, I have been accused of ignorance; again the accusation is false.
It's fortunate only my words can make it into this medium. I would
appreciate the termination of this discussion, but will not stand by
and be patronized without responding. All sane and rational people,
hit the <del> and go on to the next news item please.
When I say psychologists do not do very good science I am talking about
the exact same thing you are talking about. There is no escape. Those
"rigorous" experiments sometime succeed in establishing some "facts",
but they are sufficiently encumbered by lack of controls that one often
does not know what to make of them. This is not to imply a criticism of
psychologists as intellectually inferior to chemists, but the field is
just not there yet. Is Linguistics a science? Is teaching a science?
Laws (and usually morals) prevent the experiments we would need to do REAL
controlled experiments; lack of understanding would probably prevent
immediate progress even in the absence of those laws. It's a bit like
trying to make a "scientific" study of a silicon wafer with 1850's tools
and understanding of electronics. A variety of interesting facts could
be established, but it is not clear that they would be very useful. Tack
on some I/O systems and you could then perhaps allow the collection of
reams of timing and capability data and could try to correlate the results
and try to build theories -- that LOOKS like science. But is it? In
my book, to be a science, there must be a process of convergence in which
the theories move ever closer to explaining reality, and the experiments
become ever more precise. I don't see much convergence in experimental
psychology. I see more of a cyclic nature to the theories ....
----GaryFostel----
P.S. There are a few other sciences which do not deserve
the title, so don't feel singled out. Computer
Science for example.
------------------------------
Date: Tue, 29 Nov 83 11:15 EST
From: Chris Moss <Moss.UPenn@Rand-Relay>
Subject: The Lady or the Tiger
[Reprinted from the Prolog Digest.]
Since it's getting near Christmas, here are a few puzzlers to
solve in Prolog. They're taken from Raymond Smullyan's delightful
little book of the above name. Sexist allusions must be forgiven.
There once was a king, who decided to try his prisoners by giving
them a logic puzzle. If they solved it they would get off, and
get a bride to boot; otherwise ...
The first day there were three trials. In all three, the king
explained, the prisoner had to open one of two rooms. Each room
contained either a lady or a tiger, but it could be that there
were tigers or ladies in both rooms.
On each room he hung a sign as follows:
Sign I:  In this room there is a lady, and in the other room there is a tiger.
Sign II: In one of these rooms there is a lady, and in one of these rooms there is a tiger.
"Is it true, what the signs say ?", asked the prisoner.
"One of them is true", replied the king, "but the other one is false"
If you were the prisoner, which would you choose (assuming, of course,
that you preferred the lady to the tiger) ?
-------------------------
For the second and third trials, the king explained that either
both statements were true, or both were false. What is the
situation ?
Signs for Trial 2:
Sign I:  At least one of these rooms contains a tiger.
Sign II: A tiger is in the other room.
Signs for Trial 3:
Sign I:  Either a tiger is in this room or a lady is in the other room.
Sign II: A lady is in the other room.
Representing the problems is much more difficult than finding the
solutions. The latter two test a sometimes ignored aspect of the
[Prolog] language.
Have fun !
------------------------------
Date: 27 Nov 1983 20:42:46-EST
From: Mark.Fox at CMU-RI-ISL1
Subject: AI talk
[Reprinted from the CMU-AI bboard.]
TITLE: Databases and the Logic of Business
SPEAKER: Ronald M. Lee, IIASA Austria & LNEC Portugal
DATE: Monday, Nov. 28, 1983
PLACE: MS Auditorium, GSIA
ABSTRACT: Business firms differentiate themselves with special products,
services, etc. Nevertheless, commercial activity requires certain
standardized concepts, e.g., a common temporal framework, currency of
exchange, concepts of ownership and contractual obligation. A logical data
model, called CANDID, is proposed for modelling these standardized aspects
in axiomatic form. The practical value is the transportability of this
knowledge across a wide variety of applications.
------------------------------
Date: 30 Nov 83 18:58:27 PST (Wednesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 12/1/83
Professor Roman Lopez de Montaras
Politecnico Universidade Barcelona
A Learning System for Linguistic Categorization of Soft
Observations
We describe a human-guided feature classification system. A person
teaches the denotation of subjective linguistic feature descriptors to
the system by reference to examples. The resulting knowledge base of
the system is used in the classification phase for interpretation of
descriptions.
Interpersonal descriptions are communicated via semantic translations of
subjective descriptions. The advantage of a subjective linguistic
description over more traditional arithmomorphic schemes is their high
descriptor-feature consistency. This is due to the relative simplicity
of the underlying cognitive process. The result is a high feature
resolution for the overall cognitive perception and description
processes.
At present the system is still being used for categorization of "soft"
observations in psychological research, but applications in any
person-machine system are conceivable.
------------------------------
End of AIList Digest
********************
∂02-Dec-83 0947 KJB@SRI-AI.ARPA Area C meeting with Burstall
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Dec 83 09:47:00 PST
Date: Fri 2 Dec 83 09:46:01-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Area C meeting with Burstall
To: csli-folks@SRI-AI.ARPA
Just a reminder of today's meeting with Burstall for everyone
interested in area C -- 11:00 am here. It is important for him
to be able to assess our strengths and weaknesses, so please
come if you are in area C at all.
-------
∂02-Dec-83 1115 @SU-SCORE.ARPA:ullman@diablo Computer Use Committee
Received: from SU-SCORE by SU-AI with TCP/SMTP; 2 Dec 83 11:14:56 PST
Received: from Diablo by SU-SCORE.ARPA with TCP; Fri 2 Dec 83 11:13:43-PST
Date: Fri, 2 Dec 83 11:13 PST
From: Jeff Ullman <ullman@diablo>
Subject: Computer Use Committee
To: faculty@score
The committee, consisting of Andre Broder, Keith Lantz, Victoria Pigman,
and me met to consider the policy regarding student use of SCORE for
classes. We concluded that to allow unrestricted use of SCORE
for classwork in upperclass courses (137 and above) would obligate
the CSD to pay about $200K/year in SCORE charges, and that this
was therefore not an alternative.
We propose to:
1. Ask the Vice-provost for computing for the funds to run a
selected set of courses on SCORE, the exact number depending on
the funds (if any) forthcoming, and the amount of unsubscribed time
on SCORE.
2. To alleviate some of the pressure our students feel at LOTS, allow
students to use SCORE accounts for document preparation for coursework.
There would be no increase in the subsidy for CSD supported accounts,
nor did we feel one was necessary.
Note that we do not run up against charges of discriminating in favor
of CSD students in courses, because no course to our knowledge
*requires* assignments to be prepared on LOTS.
Does this meet the approval of the faculty?
Comments to the committee will be appreciated.
∂02-Dec-83 1342 GOLUB@SU-SCORE.ARPA Help needed
Received: from SU-SCORE by SU-AI with TCP/SMTP; 2 Dec 83 13:42:15 PST
Date: Fri 2 Dec 83 13:41:15-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Help needed
To: faculty@SU-SCORE.ARPA
cc: jutta@SU-SCORE.ARPA
We need another person or two for the Ph D admissions committee.
Any volunteers?
GENE
-------
∂02-Dec-83 1403 GOLUB@SU-SCORE.ARPA Vote for Consulting Professors
Received: from SU-SCORE by SU-AI with TCP/SMTP; 2 Dec 83 14:03:51 PST
Date: Fri 2 Dec 83 14:03:48-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Vote for Consulting Professors
To: Academic-Council: ;
There is a desire to appoint several members of CSLI as consulting
professors of this department. According to Bower, these appointments
would not constrain our appointment of other consulting professors.
Here is the proposal with the associated rank. Please vote for the
entire block. I would like your vote by Tuesday Dec 6 at 5PM.
The papers on each prospective appointee are in Elyse's office.
GENE
PS I am sending out hard copy. If you reply by electronic mail, there
is no need to send in a ballot.
-----------------------------------------------------------------------
Martin Kay Consulting Professor
Robert Moore Consulting Associate Professor
Barbara Grosz Consulting Associate Professor
Raymond Perrault Consulting Associate Professor
Brian Smith Consulting Assistant Professor
Stanley Rosenschein Consulting Assistant Professor
---------------------------------------------------------------------------
YES
I vote NO on all six appointments.
ABSTAIN
-----------------------
NAME
-------
Yes on all 6.
∂02-Dec-83 2044 LAWS@SRI-AI.ARPA AIList Digest V1 #108
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Dec 83 20:44:19 PST
Date: Fri 2 Dec 1983 16:15-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #108
To: AIList@SRI-AI
AIList Digest Saturday, 3 Dec 1983 Volume 1 : Issue 108
Today's Topics:
Editorial Policy,
AI Jargon,
AI - Challenge Responses,
Expert Systems & Knowledge Representation & Learning
----------------------------------------------------------------------
Date: Fri 2 Dec 83 16:08:01-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Editorial Policy
It has been suggested that the volume on this list is too high and the
technical content is too low. Two people have recently written to me
suggesting that the digest be converted to a magazine format with
perhaps a dozen edited departments that would constitute alternating
special issues.
I appreciate their offers to serve as editors, but have no desire to
change the AIList format. The volume has been high, but that is
typical of new lists. I encourage technical contributions, but I do
not wish to discourage general-interest discussions. AIList provides
a forum for material not appropriate to journals and conferences --
"dumb" questions, requests for information, abstracts of work in
progress, opinions and half-baked ideas, etc. I do not find these a
waste of time, and attempts to screen any class of "uninteresting"
messages will only deprive those who are interested in them. A major
strength of AIList is that it helps us develop a common vocabulary for
those topics that have not yet reached the textbook stage.
If people would like to split off their own sublists, I will be glad
to help. That might reduce the number of uninteresting messages
each reader is exposed to, although the total volume of material would
probably be higher. Narrow lists do tend to die out as their boom and
bust cycles gradually lengthen, but AIList could serve as the channel
by which members could regroup and recruit new members. The chief
disadvantage of separate lists is that we would lose valuable
cross-fertilization between disciplines.
For the present, I simply ask that members be considerate when
composing messages. Be concise, preferably stating your main points
in list form for easy reference. Remember that electronic messages
tend to seem pugnacious, so that even slight sarcasm may arouse
numerous rebuttals and criticisms. It is unnecessary to marshal
massive support for every claim since you will have the opportunity to
reply to critics. Also, please keep in mind that AIList (under my
moderatorship) is primarily concerned with AI and pattern recognition,
not psychology, metaphysics, philosophy of science, or any other topic
that has its own major following. We welcome any material that
advances the progress of intelligent machines, but the hard-core
discussions from other disciplines should be directed elsewhere.
-- Ken Laws
------------------------------
Date: Tue 29 Nov 83 21:09:12-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Re: Dyer's flame
In the life of this list a number of issues, among them intelligence,
parallelism and AI, defense of AI, rational psychology, and others have
been maligned as "pointless" or whatever. Without getting involved in a
debate on "philosophy" vs. "real research", a quick scan of these topics
shows them to be far from pointless. I regret that Dyer's students have
stopped reading this list; perhaps they should follow his advice of submitting
the right type of article to this list.
As a side note, I am VERY interested in having people outside of mainstream
AI participate in this list; while one sometimes wades through muddled articles
of little value, this is more than repaid by the fresh viewpoints and
occasional gem that would otherwise never have been found.
Ken Laws has done an excellent job grouping the articles by interest and
topic; uninterested readers can then skip reading an entire volume, if the
theme is uninteresting. A greater number of articles submitted can only
improve this process; the burden is on those unsatisfied with the content of
this board to submit them. I would welcome submissions of the kind suggested
by Dr. Dyer, and hope that others will follow his advice and try to lead the
board to whatever avenue they think is the most interesting. There's room
here for all of us...
David Rogers
DRogers@SUMEX-AIM.ARPA
------------------------------
Date: Tue 29 Nov 83 22:24:14-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Tools
I agree with Michael Dyer's comments on the lack of substantive
material in this list and on the importance of dealing with
new "real" tasks rather than using old solutions of old problems
to show off one's latest tool. However, I feel like adding two
comments:
1. Some people (me included) have a limited supply of "writing energy"
to write serious technical stuff: papers, proposals and the like.
Raving about generalities, however, consumes much less of that energy
per line than the serious stuff. The people who are busily writing
substantive papers have no energy left to summarize them on the net.
2. Very special tools, in particular fortunate situations
("epiphanies"?!) can bring a new and better level of understanding of a
problem, just by virtue of what can be said with the new tool, and
how. Going the other direction, we all know that we need to change our
tools to suit our problems. The paradigmatic relation between subject
and tool is for me the one between classical physics and mathematical
analysis, where tool and subject are intimately connected but yet
distinct. Nothing of the kind has yet happened in AI (which shouldn't
surprise us, seeing how long it took to develop that other
relationship...).
Note: Knowing of my involvement with Prolog/logic programming, some
reader of this might be tempted to think "Ahah! what he is really
driving at is that logic/Horn clauses/Prolog [choose one] is that kind
of tool for AI. Let me kill that presumption in the bud, these tool
addicts are dangerous!" Gentle reader, save your flame! Only time will
show whether anything of the kind is the case, and my private view on
the subject is sufficiently complicated (confused?) that if I could
disentangle it and write about it clearly I would have a paper rather
than a net message...
Fernando Pereira
------------------------------
Date: Wed 30 Nov 83 11:58:56-PST
From: Wilkins <WILKINS@SRI-AI.ARPA>
Subject: jargon
I understand Dyer's comments on what he calls the tool/content distinction.
But it seems to me that the content distinctions he rightly thinks are
important can often be expressed in terms of tools, and that it would be
clearer to do so. He talked about handling one's last trip to the restaurant
differently from the last time one is in love. I agree that this is an
important distinction to make. I would like to see the difference expressed
in "tools", e.g., "when handling a restaurant trip (or some similar class of
events) our system does a chronological search down its list of events, but
when looking for love, it does a best first search on its list of personal
relationships." This is clearer and communicates more than saying the system
has a "love-MOP" and a "restaurant-script". This is only a made up example
-- I am not saying Mr. Dyer used the above words or that he does not explain
things well. I am just trying to construct a non-personal example of the
kind of thing to which I object, but that occurs often in the literature.
------------------------------
Date: Wed, 30 Nov 83 13:47 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: McCarthy and 'mental' states
In the December Psychology Today John McCarthy has a short article that
raises a fairly contentious point.
In his article he talks about how it is not necessarily a bad thing that
people attribute "human" or what the calls 'mental' attributes to complex
systems. Thus when someone anthropomorphises the actions of his/her
car, boat, or terminal, one is engaging in a legitimate form of description
of a complex process.
Indeed he argues further that while currently most computer programs
can still be understood by their underlying mechanistic properties,
eventually complex expert systems will only be capable of being described
by attributing 'mental' states to them.
----
I think this is the proliferation of jargon and verbiage that
Ralph Johnson noted is associated with
a large segment of AI work. What has happened is not a discovery or
emulation of cognitive processes, but a break-down of certain weak
programmers' abilities to describe the mechanical characteristics of
their programs. They then resort to arcane languages and to attributing
'mental' characteristics to what are basically fuzzy algorithms that
have been applied to poorly formalized or poorly characterized problems.
Once the problems are better understood and are given a more precise
formal characterization, one no longer needs "AI" techniques.
- Steven Gutfreund
------------------------------
Date: 28 Nov 83 23:04:58-PST (Mon)
From: pur-ee!uiucdcs!uicsl!Anonymous @ Ucb-Vax
Subject: Re: Clarifying my 'AI Challange' - (nf)
Article-I.D.: uiucdcs.4190
re: The Great Promises of AI
Beware the promises of used car salesmen. The press has stories to
sell, and so do the more extravagant people within AI. Remember that
many of these people had to work hard to convince grantmakers that AI
was worth their money, back in the days before practical applications
of expert systems began to pay off.
It is important to distinguish the promises of AI from the great
fantasies that have been spun by the media (and some AI
researchers) in a fit of science fiction. AI applications will
certainly be diverse and widespread (thanks no less to the VLSI
people). However, I hope that none of us really believes that machines
will possess human general intelligence any time soon. We banter about
such stuff hoping that when ideas fly, at least some of them will be
good ones. The reality is that nobody sees a clear and brightly lit
path from here to super-intelligent robots. Rather we see hundreds of
problems to be solved. Each solution should bring our knowledge and
the capabilities of our programs incrementally forward. But let's not
kid ourselves about the complexity of the problems. As it has already
been pointed out, AI is tackling the hard problems -- the ones for
which nobody knows any algorithms.
------------------------------
Date: Wed, 30 Nov 83 10:29 PST
From: Tong.PA@PARC-MAXC.ARPA
Subject: Re: AI Challenge
Tom Dietterich:
Your view of "knowledge representations" as being identical with data
structures reveals a fundamental misunderstanding of the knowledge vs.
algorithms point. . .Why, I'll bet there's not a single AI program that
uses leftist-trees or binomial queues!
Sanjai Narain:
We at Rand have ROSS. . .One implementation of ROSS uses leftist trees for
maintaining event queues. Since these queues are in the innermost loop
of ROSS's operation, it was only sensible to make them as efficient as
possible. We think we are doing AI.
Sanjai, you take the letter but not the spirit of Tom's reflection. I
don't think any AI researcher would object to improving the efficiency
of her program, or using traditional computer science knowledge to help.
But - look at your own description of ROSS development! Clearly you
first conceptualized ROSS ("queues are the innermost loop") and THEN
worried about efficiency in implementing your conceptualization ("it was
only sensible to make them as efficient as possible"). Traditional
computer science can shed much light on implementation issues, but has
in practice been of little direct help in the conceptualization phase
(except occasionally by analogy and generalization). All branches of
computer science share basic interests such as how to represent and use
knowledge, but AI differs in the GRAIN SIZE of the knowledge it
considers. It would be very desirable to have a unified theory of
computer science that provides ideas and tools along the continuum of
knowledge grain size; but we are not quite there, yet. Until that time,
perceiving the different branches of computer science as contributing
useful knowledge to different levels of implementation (e.g. knowledge
level, data level, register transfer level, hardware level) is probably
the best integration our short term memories can handle.
Chris Tong
------------------------------
Date: 28 Nov 83 22:25:35-PST (Mon)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: RJ vs AI: Science vs Engineering? - (nf)
Article-I.D.: uiucdcs.4187
In response to Johnson vs AI, and Tom Dietterich's defense:
The emergence of the knowledge-based perspective is only the beginning of
what AI has achieved and is working on. Obvious corollaries: knowledge
acquisition and extraction, representation, inference engines.
Some rather impressive results have been obtained here. One with which I
am most familiar is work being done at Edinburgh by the Machine Intelligence
Research Unit on knowledge extraction via induction from user-supplied
examples (the induction program is commercially available). A paper by
Shapiro (Alen) & Niblett in Computer Chess 3 describes the beginnings of the
work at MIRU. Shapiro has only this month finished his PhD, which effectively
demonstrates that human experts, with the aid of such induction programs,
can produce knowledge bases that surpass the capabilities of any expert
as regards their completeness and consistency. Shapiro synthesized a
totally correct knowledge base for part of the King-and-Pawn against
King-and-Rook chess endgame, and even that relatively small endgame
was so complex that, though it was treated in the chess literature, the
descriptions provided by human experts consisted largely of gaps. Impressively,
3 chess novices managed (again with the induction program) to achieve 99%
correctness in this normally difficult problem.
The issue: even novices are better at articulating knowledge
by means of examples than experts are at articulating the actual
rules involved, *provided* that the induction program can represent
its induced rules in a form intelligible to humans.
The long-term goal and motivation for this work is the humanization of
technology, namely the construction of systems that not only possess expert
competence, but are capable of communicating their reasoning to humans.
And we had better get this right, lest we get stuck with machines that run our
nuclear plants in ways that are perhaps super-smart but incomprehensible ...
until a crisis happens, when suddenly the humans need to understand what the
machine has been doing until now.
The problem: lack of understanding of human cognitive psychology. More
specifically, how are human concepts (even for these relatively easy
classification tasks) organized? What are the boundaries of 'intelligibility'?
Though we are able to build systems that function, in some ways, like a human
expert, we do not know much about what distinguishes brain-computable processes
from general algorithms.
But we are learning. In fact, I am tempted to define this as one criterion
distinguishing knowledge-based AI from other computing: the absolute necessity
of having our programs explain their own processing. This is close to demanding
that they also process in brain-compatible terms. In any case we will need to
know what the limits of our brain-machine are, and in what forms knowledge
is most easily apprehensible to it. This brings our end of AI very close to
cognitive psychology, and threatens to turn knowledge representation into a
hard science -- not just
What does a system need, to be able to X?
but How does a human brain produce behavior/inference X, and how do
we implement that so as to preserve maximal man-machine compatibility?
Hence the significance of the work by Shapiro, mentioned above: the
intelligibility of his representations is crucial to the success of his
knowledge-acquisition method, and the whole approach provides some clues on
how a humane knowledge representation might be scientifically determined.
A computer is merely a necessary weapon in this research. If AI has made little
obvious progress it may be because we are too busy trying to produce useful
systems before we know how they should work. In my opinion there is too little
hard science in AI, but that's understandable given its roots in an engineering
discipline (the applications of computers). Artificial intelligence is perhaps
the only "application" of computers in which hard science (discovering how to
describe the world) is possible.
We might do a favor both to ourselves and to psychology if knowledge-based AI
adopted this idea. Of course, that would cut down drastically on the number of
papers published, because we would have some very hard criteria about what
comprised a tangible contribution. Even working programs would not be
inherently interesting, no matter what they achieved or how they achieved it,
unless they contributed to our understanding of knowledge, its organization
and its interpretation. Conversely, working programs would be necessary only
to demonstrate the adequacy of the idea being argued, and it would be possible
to make very solid contributions without a program (as opposed to the flood of
"we are about to write this program" papers in AI).
So what are we: science or engineering? If both, let's at least recognize the
distinction as being valuable, and let's know what yet another expert system
proves beyond its mere existence.
Marcel Schoppers
U of Illinois @ Urbana-Champaign
------------------------------
End of AIList Digest
********************
∂04-Dec-83 0908 @SU-SCORE.ARPA:uucp@Shasta Re: official rumor
Received: from SU-SCORE by SU-AI with TCP/SMTP; 4 Dec 83 09:08:22 PST
Received: from Shasta by SU-SCORE.ARPA with TCP; Sun 4 Dec 83 09:08:05-PST
Received: from decwrl by Shasta with UUCP; Sun, 4 Dec 83 09:07 PST
Date: 4 Dec 1983 0901-PST (Sunday)
Sender: uucp@Shasta
From: decwrl!baskett (Forest Baskett) <decwrl!baskett@Shasta>
Subject: Re: official rumor
Message-Id: <8312041701.AA01099@DECWRL>
Received: by DECWRL (3.327/4.09) 4 Dec 83 09:01:09 PST (Sun)
To: Brian Reid <Glacier!reid@Shasta>, Faculty@Score, CSLFaculty@Sierra
Cc: White@Sierra, Linvill@Sierra, Meindl@Sierra, Gibbons@Sierra
In-Reply-To: Your message of Thursday, 1 December 1983 15:53:27-PST.
<8312020005.AA29459@DECWRL>
George Pake's announcement was not quite accurate. The facts are that
DEC has hired Bob Taylor and has made offers to Chuck Thacker and
Butler Lampson with the intention of setting up a new computer systems
research center in Palo Alto. DEC, of course, has operated and will continue to
operate under the highest corporate ethical principles in setting up
this new research center, as in all its other dealings.
To those of you who are interested, I'll be happy to supply what few
other details there are on this exciting new venture.
Forest (Baskett@Score)
∂04-Dec-83 1748 PPH Course Anouncement - SWOPSI 160
To: funding@SU-AI
Matt Nicodemus will be offering a SWOPSI course this winter entitled
"Military Funding for Research at Stanford."
Meetings will be Tuesdays and Thursdays from 7 to 8:30 PM. The room will
be announced later.
Major themes to be addressed include how research is used by the military,
the degree of university involvement with military-funded research, the
impacts of military sponsorship on the University, the nation, and the
rest of the world, military-industrial-university connections, and the
1971 SWOPSI study, "DoD Sponsored Funding for Research at Stanford."
Class meetings will consist of discussions of readings and talks presented
by a number of speakers. A reading list and course outline can be
obtained at the SWOPSI office (590-A Old Union).
All members of the Stanford community are welcome to attend.
∂05-Dec-83 0250 LAWS@SRI-AI.ARPA AIList Digest V1 #109
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Dec 83 02:49:28 PST
Date: Sun 4 Dec 1983 22:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #109
To: AIList@SRI-AI
AIList Digest Monday, 5 Dec 1983 Volume 1 : Issue 109
Today's Topics:
Expert Systems & VLSI - Request for Material,
Programming Languages - Productivity,
Editorial Policy - Anonymous Messages,
Bindings - Dr. William A. Woods,
Intelligence,
Looping Problem,
Pattern Recognition - Block Modeling,
Seminars - Programs as Predicates & Explainable Expert System
----------------------------------------------------------------------
Date: Sun, 4 Dec 83 17:59:53 PST
From: Tulin Mangir <tulin@UCLA-CS>
Subject: Request for Material
I am preparing a tutorial and a current bibliography, for IEEE,
of the work in the area of expert system applications to CAD and computer aided
testing as well as computer aided processing. Specific emphasis is
on LSI/VLSI design, testing and processing. I would like this
material to be as complete and as current as we can all make it. So, if you
have any material in these areas that you would like me to include
in the notes, ideas about representation of structure, knowledge,
behaviour of digital circuits, etc., references you know of,
please send me a msg. Thanks.
Tulin Mangir <cs.tulin@UCLA-cs>
(213) 825-2692
825-4943 (secretary)
------------------------------
Date: 29 Nov 83 22:25:19-PST (Tue)
From: sri-unix!decvax!duke!mcnc!marcel@uiucdcs.UUCP (marcel )@CCA
Subject: Re: lisp productivity question - (nf)
Article-I.D.: uiucdcs.4197
And now a plug from the logic programming people: try Prolog for easy
debugging. Though it may take a while to get used to its modus operandi,
it has one advantage that is shared by no other language I know of:
rule-based computing with a clean formalism. Not to mention the ease
of implementing concepts such as "for all X satisfying P(X) do ...".
The end of cumbersome array traversals and difficult boolean conditions!
Well, almost. Not to mention free pattern matching. And I wager that
the programs will be even shorter in Prolog, primarily because of these
considerations. I have written 100-line Prolog programs which were
previously coded as Pascal programs of 2000 lines.
Sorry, I just couldn't resist the chance to be obnoxious.
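[A rough Python sketch of the "for all X satisfying P(X) do ..." idiom described
above; the facts and the predicate are invented for illustration, and Prolog
itself would state them as a fact base plus a one-line query:]
    # Invented toy fact base: parent(X, Y) means X is a parent of Y.
    people = {"tom", "bob", "liz", "ann"}
    parents = {("tom", "bob"), ("tom", "liz"), ("bob", "ann")}

    def parent(x, y):
        # The predicate P(X, Y), looked up in the fact set above.
        return (x, y) in parents

    # "for all X satisfying parent(tom, X) do ..." -- here, just print each X.
    for x in (p for p in people if parent("tom", p)):
        print(x)
    # In Prolog the loop disappears: the query parent(tom, X) enumerates the
    # same bindings directly.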
------------------------------
Date: Fri, 2 Dec 83 09:47 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: Lisp "productivity"
"A caveat: Lisp is very well suited to the nature of game programs.
A fair test would require that data processing and numerical analysis
problems be included in the mix of test problems."
A fair test of what? A fair test of which language yields the greatest
productivity when applied to the particular mix of test problems, I
would think. Clearly (deepfelt theological convictions to the contrary)
there is NO MOST-PRODUCTIVE LANGUAGE. It depends on the problem set; I
like structured languages so I do my scientific programming in Ratfor,
and when I had to do it in Pascal it was awful, but for a different type
of problem Pascal would be just fine.
Mark
------------------------------
Date: 30 Nov 83 22:49:51-PST (Wed)
From: pur-ee!uiucdcs!uicsl!Anonymous @ Ucb-Vax
Subject: Lisp Productivity & Anonymous Messages
Article-I.D.: uiucdcs.4245
The most incredible programming environment I have worked with to date is
that of InterLisp. The graphics-based trace and break packages on Xerox's
InterLisp-D (not to mention the Lisp editor, file package, and the
programmer's assistant) are, to say the least, addictive. Ease of debugging
has been combined with power to yield an environment in which program
development/debugging is easy, fast and productive. I think other languages
have a long way to go before someone develops comparable environments for
them. Of course, part of this is due to the language (i.e., Lisp) itself,
since programs written in Lisp tend to be easy to conceptualize and write,
short, and readable.
[I will pass this message along to the Arpanet AIList readers,
but am bothered by its anonymous authorship. This is hardly an
incriminating message, and I see no reason for the author to hide.
I do not currently reject anonymous messages out of hand, but I
will certainly screen them strictly. -- KIL]
------------------------------
Date: Thu 1 Dec 83 07:37:04-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Press Release RE: Dr. William A. Woods
[Reprinted from the SU-SCORE bboard.]
As of September 16, Dr. Woods is Chief Scientist directing all research in AI
and related technologies for Applied Expert Systems, Inc., Five Cambridge
Center, Cambridge, Mass. 02142; (617) 492-7322; net address Woods@BBND (same as before)
HL
------------------------------
Date: Fri, 2 Dec 83 09:57:14 PST
From: Adolfo Di-Mare <v.dimare@UCLA-LOCUS>
Subject: a new definition of intelligence
Your intelligence is directly proportional to the time it takes
you to bounce back after you're replaced by an <intelligent> computer.
As I'm not an economist, I won't argue about how intelligent we are...
Put another way: is an expert who builds a machine that substitutes
for him/her intelligent? If s/he is not, is the machine?
Adolfo
///
------------------------------
Date: 1 Dec 83 20:37:31-PST (Thu)
From: decvax!bbncca!jsol @ Ucb-Vax
Subject: Re: Halting Problem Discussion
Article-I.D.: bbncca.365
Can a method be formulated for deciding whether or not you are on the right
track? Yes. It's called interaction. Ask someone you feel you can trust about
whether or not you are getting anywhere, and to offer any advice to help you
get where you want to go.
Students do it all the time: they come to their teachers and ask them for
help. Looping programs could decide that they have looped for as long
as they care to and then reality-check themselves. An algorithm to do this is available
if anyone wants it (read that to mean I will produce one).
--
[--JSol--]
JSol@Usc-Eclc/JSol@Bbncca (Arpa)
JSol@Usc-Eclb/JSol@Bnl (Milnet)
{decvax, wjh12, linus}!bbncca!jsol
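[A minimal sketch of the kind of self-imposed "reality check" suggested above;
the budget, the check function, and all names here are invented for
illustration, and this is not the algorithm the author offers to produce:]
    # Invented example: a search that checks in with a "trusted outsider"
    # (here just an iteration budget) instead of looping indefinitely.
    def keep_going(iterations, budget=10000):
        # Stand-in for asking someone you trust whether you are getting anywhere.
        return iterations < budget

    def search(candidates, is_solution):
        iterations = 0
        for c in candidates:
            iterations += 1
            if not keep_going(iterations):
                return None      # stop and ask for help rather than loop forever
            if is_solution(c):
                return c
        return None

    # search(range(10**6), lambda n: n * n == 1234321) returns 1111;
    # a hopeless search instead gives up once the budget is spent.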
------------------------------
From: Bibbero.PMSDMKT
Reply-to: Bibbero.PMSDMKT
Subject: Big Brother and Block Modeling, Warning
[Reprinted from the Human-Nets Digest.]
[This application of pattern recognition seems to warrant mention,
but comments on the desirability of such analysis should be directed
to Human-Nets@RUTGERS. -- KIL]
The New York Times (Nov 20, Sunday Business Section) carries a warning
from two Yale professors against a new management technique that can
be misused to snoop on personnel through sophisticated mathematical
analysis of communications, including computer network usage.
Professors Scott Boorman, a Yale sociologist, and Paul Levitt, a
research mathematician at Yale and Harvard (economics), who authored
the article, also invented the technique some years ago. Briefly, it
consists of computer-intensive analysis of personnel communications to
divide them into groups or "blocks" depending on whom they communicate
with, whom they copy on messages, whom they phone, and whose calls they
don't return. Blocks of people so identified can be classified as
dissidents, potential traitors or "Young Turks" about to split off
their own company, company loyalists, promotion candidates and so
forth. "Guilt by association" is built into the system since members
of the same block may not even know each other but merely copy the
same person on memos.
The existence of an informal organization as a powerful directing
force in corporations, over and above the formal organization chart,
has been recognized for a long time. The block analysis method
permits and "x-ray" penetration of these informal organizations
through use of computer on-line analysis which may act, per the
authors, as "judge and jury." The increasing usage of electronic
mail, voice storage and forward systems, local networks and the like
make clandestine automation of this kind of snooping simple, powerful,
and almost inevitable. The authors cite as misusage evidence the high
degree of interest in the method by iron curtain government agencies.
An early success (late 60's) was also demonstrated in a Catholic
monastery where it averted organizational collapse by identifying
members as loyalists, "Young Turks," and outcasts. Currently,
interest is high in U.S. corporations, particularly in internal
audit departments seeking to identify dissidents.
As the authors warn, this revolution in computers and information
systems brings us closer to George Orwell's state of Oceania.
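[A toy sketch of the "block" idea described above, for readers who have not
seen it; the mail data and the similarity threshold are invented, and the
real method is far more sophisticated:]
    # Invented example: people whose sets of correspondents overlap heavily
    # are lumped into the same block, even if they never write to each other.
    mail = {
        "alice": {"carol", "dave"},
        "bob":   {"carol", "dave"},
        "carol": {"erin"},
    }

    def similar(a, b, threshold=0.5):
        overlap = len(mail[a] & mail[b])
        union = len(mail[a] | mail[b]) or 1
        return overlap / union >= threshold

    blocks = []
    for person in mail:
        for block in blocks:
            if all(similar(person, member) for member in block):
                block.append(person)
                break
        else:
            blocks.append([person])

    print(blocks)  # alice and bob share a block without ever mailing each other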
------------------------------
Date: 1 Dec 1983 1629-EST
From: ELIZA at MIT-XX
Subject: Seminar Announcement
[Reprinted from the MIT-AI bboard.]
Date: Wednesday, December 7th, 1983
Time: Refreshments 3:30 P.M.
Seminar 3:45 P.M.
Place: NE43-512A (545 Technology Square, Cambridge)
PROGRAMS ARE PREDICATES
C. A. R. Hoare
Oxford University
A program is identified with the strongest predicate
which describes every observation that might be made
of a mechanism which executes the program. A programming
language is a set of programs expressed in a limited
notation, which ensures that they are implementable
with adequate efficiency, and that they enjoy desirable
algebraic properties. A specification S is a predicate
expressed in arbitrary mathematical notation. A program
P meets this specification if
P ==> S .
Thus a calculus for the derivation of correct programs
is an immediate corollary of the definition of the
language.
These theses are illustrated in the design of two simple
programming languages, one for sequential programming and
the other for communicating sequential processes.
Host: Professor John V. Guttag
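[A deliberately tiny reading of the P ==> S relation in the abstract above,
sketched in Python rather than in Hoare's notation; the example program,
specification, and range of observations are invented for illustration:]
    # Treat a one-line program and its specification as predicates over
    # observations (initial value x, final value x2), and check P ==> S
    # by enumerating a small, invented range of observations.
    def P(x, x2):
        # The program "x := x + 1", viewed as a predicate.
        return x2 == x + 1

    def S(x, x2):
        # The specification: the final value exceeds the initial one.
        return x2 > x

    meets_spec = all(S(x, x2)
                     for x in range(-10, 10)
                     for x2 in range(-10, 12)
                     if P(x, x2))
    print(meets_spec)  # True: every observation P allows also satisfies S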
------------------------------
Date: 12/02/83 09:17:19
From: ROSIE at MIT-ML
Subject: Expert Systems Seminar
[Forwarded by SASW@MIT-MC.]
DATE: Thursday, December 8, 1983
TIME: 2.15 p.m. Refreshments
2.30 p.m. Lecture
PLACE: NE43-AI Playroom
Explainable Expert Systems
Bill Swartout
USC/Information Sciences Institute
Traditional methods for explaining programs provide explanations by converting
the code of the program to English. While such methods can sometimes
adequately explain program behavior, they cannot justify it. That is, such
systems cannot tell why what the system is doing is reasonable. The problem
is that the knowledge required to provide these justifications was used to
produce the program but is itself not recorded as part of the code and hence
is unavailable. This talk will first describe the XPLAIN system, a previous
research effort aimed at improving the explanatory capabilities of expert
systems. We will then outline the goals and research directions for the
Explainable Expert Systems project, a new research effort just starting up at
ISI.
The XPLAIN system uses an automatic programmer to generate a consulting
program by refinement from abstract goals. The automatic programmer uses two
sources of knowledge: a domain model, representing descriptive facts about the
application domain, and a set of domain principles, representing
problem-solving knowledge, to drive the refinement process forward. As XPLAIN
creates an expert system, it records the decisions it makes in a refinement
structure. This structure is then used to provide explanations and
justifications of the expert system.
Our current research focuses on three areas. First, we want to extend the
XPLAIN framework to represent additional kinds of knowledge such as control
knowledge for efficient execution. Second, we want to investigate the
compilation process that moves from abstract to specific knowledge. While it
does seem that human experts compile their knowledge, they do not always use
the resulting specific methods. This may be because the specific methods
often contain compiled-in assumptions which are usually (but not always)
correct. Third, we intend to use the richer framework provided by XPLAIN for
enhanced knowledge acquisition.
HOST: Professor Peter Szolovits
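[A very rough sketch of the "record the refinement decisions, then explain
from them" idea in the abstract above; the node layout and the medical example
are invented and are not ISI's actual representation:]
    # Invented example: each refinement step from an abstract goal records the
    # domain principle that justified it, so explanations can give reasons
    # instead of merely restating the code.
    class Refinement:
        def __init__(self, goal, principle, children=None):
            self.goal = goal            # what this step tries to achieve
            self.principle = principle  # the domain principle behind it
            self.children = children or []

        def justify(self, depth=0):
            print("  " * depth + self.goal + "  (because: " + self.principle + ")")
            for child in self.children:
                child.justify(depth + 1)

    plan = Refinement("adjust the drug dose",
                      "dose depends on the measured serum level",
                      [Refinement("measure the serum level first",
                                  "never adjust a dose from stale data")])
    plan.justify()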
------------------------------
End of AIList Digest
********************
∂05-Dec-83 1022 KJB@SRI-AI.ARPA This Thursday
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Dec 83 10:22:22 PST
Date: Mon 5 Dec 83 10:18:07-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: This Thursday
To: csli-friends@SRI-AI.ARPA
The Linguistics Department, with some cooperation from CSLI, is
sponsoring a symposium on conditionals this week, Thursday - Sat.
It begins at 2 pm Thursday. In order not to conflict with these
activities, CSLI will postpone this Thursday's activities, with
the exception of TINLunch, to next Thursday, Dec. 15. Sorry for
the late word.
For more information on the conditionals symposium, call the linguistics
department.
Jon Barwise
-------
∂05-Dec-83 1255 @MIT-MC:MINSKY%MIT-OZ@MIT-MC
Received: from MIT-MC by SU-AI with TCP/SMTP; 5 Dec 83 12:55:13 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 5 Dec 83 15:38-EST
Date: Mon, 5 Dec 1983 15:35 EST
Message-ID: <MINSKY.11973127886.BABYL@MIT-OZ>
From: MINSKY%MIT-OZ@MIT-MC.ARPA
To: phil-sci%mit-oz@MIT-MC
In-reply-to: Msg of 22 Nov 1983 12:53-EST from GAVAN%MIT-OZ at MIT-MC.ARPA
Please remove me from this list.
∂05-Dec-83 1332 ALMOG@SRI-AI.ARPA Reminder on why context wont go away
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Dec 83 13:32:37 PST
Date: 5 Dec 1983 1332-PST
From: Almog at SRI-AI
Subject: Reminder on why context wont go away
To: csli-friends at SRI-AI
Tomorrow, Dec. 6, we have the last meeting of the quarter.
Speaker: Ivan Sag. Time and place: Ventura Hall, 3:15 pm.
Next quarter we continue at the same spatio-temporal location. We have
a rather exciting topic (the analysis of discourse). Detailed information
on next term's theme and speakers will follow.
I attach an abstract of Sag's talk:
IVAN SAG
FORMAL SEMANTICS AND EXTRALINGUISTIC CONTEXT
This paper is a reaction to the suggestion that examples
like :
The ham sandwich at table 9 is getting restless.
[waiter to waiter] (due to G. Nunberg)
He porched the newspaper. (due to Clark and Clark)
threaten the enterprise of constructing a theory of
compositional aspects of literal meaning in natural languages.
With Kaplan's logic of demonstratives as a point of departure,
I develop a framework in which "transfers of sense" and
"transfers of reference" can be studied within a formal
semantic analysis. The notion of context is expanded to
include functions which transfer the interpretations of
subconstituents in such a way that compositional principles
can be maintained. The resulting approach distinguishes
two ways in which context affects interpretation: (1) in the
initial determination of "literal utterance meaning" and
(2) in the determination (say, in the Gricean fashion) of
"conveyed meaning".
-------
∂05-Dec-83 1442 LENAT@SU-SCORE.ARPA topic for lunch discussion
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Dec 83 14:42:05 PST
Date: Mon 5 Dec 83 14:41:40-PST
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: topic for lunch discussion
To: faculty@SU-SCORE.ARPA
One topic which has recently caught my attention is the question of
where, in our programs, "meaning" resides. For instance, some of it
is encoded by the choice of data structures we make, some in comments
or separate documentation, some only in the eye of the user. I'm not sure
how much there is to say about this, but I can describe why it's turned out
to be an important issue in my own research on Machine Learning.
See you all at lunch.
Doug
-------
∂05-Dec-83 1529 GOLUB@SU-SCORE.ARPA Absence
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Dec 83 15:29:25 PST
Date: Mon 5 Dec 83 15:27:48-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Absence
To: faculty@SU-SCORE.ARPA
I shall be gone from Dec 7 until Dec 23. During my absence Jeff will
handle any URGENT administrative problems.
GENE
-------
∂05-Dec-83 1533 GOLUB@SU-SCORE.ARPA Meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Dec 83 15:33:33 PST
Date: Mon 5 Dec 83 15:32:02-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Meeting
To: CSD-Senior-Faculty: ;
We meet on Tuesday Dec 6 at 2:30. GENE
-------
∂05-Dec-83 1556 JF@SU-SCORE.ARPA Bell Fellowship
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Dec 83 15:56:33 PST
Date: Mon 5 Dec 83 15:55:55-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: Bell Fellowship
To: faculty@SU-SCORE.ARPA
cc: tajnai@SU-SCORE.ARPA
I am writing to you in my role as Fellowship Committee. As you probably
don't recall, one of the reasons I took this job was that I was appalled
at the nonchalance with which requests by IBM, Xerox, Bell, etc. for
fellowship nominations were treated last year, and I think students (and hence
the whole department) get burned when they are told to get an application
and recommendations together yesterday. I am doing the best I can to see
that that doesn't happen again this year. Carolyn Tajnai sent a message
to all of you saying that she needs 2 or 3 nominations for a Bell Fellowship
by December 12 (that's a week from today) and today she told me that she
hasn't had one response. Please, please respond. There will be ONE
fellowship awarded to a US citizen who is expected to graduate within 4
years. Now, I'm sure that each of you has a favorite student who is a US
citizen, will graduate within 4 years, and could use some fellowship
support.
LET'S GET ON IT!!!!
thanks,
joan
-------
∂05-Dec-83 1606 @SU-SCORE.ARPA:WIEDERHOLD@SUMEX-AIM.ARPA Re: Bell Fellowship
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Dec 83 16:06:15 PST
Received: from SUMEX-AIM.ARPA by SU-SCORE.ARPA with TCP; Mon 5 Dec 83 16:04:53-PST
Date: Mon 5 Dec 83 16:04:17-PST
From: Gio Wiederhold <WIEDERHOLD@SUMEX-AIM.ARPA>
Subject: Re: Bell Fellowship
To: JF@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA, tajnai@SU-SCORE.ARPA
In-Reply-To: Message from "Joan Feigenbaum <JF@SU-SCORE.ARPA>" of Mon 5 Dec 83 15:57:15-PST
I would nominate Peter Rathman. He is in his second year now,
has passed his comps except for the programming project, and is looking
at version management for optical disks.
Gio
-------
∂05-Dec-83 1628 TAJNAI@SU-SCORE.ARPA Re: Bell Fellowship
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Dec 83 16:28:35 PST
Date: Mon 5 Dec 83 16:27:14-PST
From: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: Re: Bell Fellowship
To: WIEDERHOLD@SUMEX-AIM.ARPA, JF@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA
In-Reply-To: Message from "Gio Wiederhold <WIEDERHOLD@SUMEX-AIM.ARPA>" of Mon 5 Dec 83 16:04:59-PST
Peter Rathmann would make a good candidate. However, he already has an
NSF fellowship.
Carolyn
-------
∂05-Dec-83 1745 @SU-SCORE.ARPA:GENESERETH@SUMEX-AIM.ARPA Re: Call for Bell Fellowship Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Dec 83 17:43:36 PST
Received: from SUMEX-AIM.ARPA by SU-SCORE.ARPA with TCP; Mon 5 Dec 83 17:40:11-PST
Date: Mon 5 Dec 83 17:39:49-PST
From: Michael Genesereth <GENESERETH@SUMEX-AIM.ARPA>
Subject: Re: Call for Bell Fellowship Nominations
To: TAJNAI@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA, JF@SU-SCORE.ARPA
In-Reply-To: Message from "Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>" of Wed 30 Nov 83 11:23:24-PST
Carolyn,
Are there any duties associated with a Bell fellowship, e.g.
going to work for them during the summer, or is it completely free?
mrg
-------
∂05-Dec-83 2150 KJB@SRI-AI.ARPA Conditionals Symposium
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Dec 83 21:50:30 PST
Date: Mon 5 Dec 83 21:46:21-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Conditionals Symposium
To: CSLI-folks@SRI-AI.ARPA
CSLI is bringing Hans Kamp, Richmond Thomason and Robert Stalnaker out
to take part in the conditionals symposium and to talk to people about
area B sort of stuff. Kamp arrived last night and is around Ventura
Hall. Stan Peters is arranging talks by Thomason and Stalnaker and will
send out a message as soon as details can be arranged.
-------
∂05-Dec-83 2301 KJB@SRI-AI.ARPA December 15
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Dec 83 23:00:53 PST
Date: Mon 5 Dec 83 22:55:33-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: December 15
To: csli-folks@SRI-AI.ARPA
Dear all,
The last regular CSLI activities for this year will be on Thursday,
December 15. The schedule will be sent out in this week's Newsletter.
If it can be arranged, there will be a party that night for all the
CSLI folk and staff. Betsy and Joyce will make the final decision
as to whether this is practical and send out a message tomorrow,
Tuesday.
We have decided to keep Thursday CSLI day next quarter. However, since
the seminars will be more technical, it is not expected that everyone
will attend them. It is time to get down to work. The morning seminar
will be in area D. The afternoon will be a course on situation semantics.
TINLunch and the Colloquium series will continue as before.
We will soon have a lot more space at Ventura, though still not enough.
But as the top floor of Cassita is turned over to us and trailers are
brought in, the rather trying circumstances of this quarter will ease
some. I hope this will make Thursdays more pleasant for all, and make
people feel much more at home at Ventura on a day to day basis than has
been possible so far.
Burstall's visit has been very useful in focusing ideas on how to build
up area C, which is so vital if we are to succeed in the first-year
review. I will be exploring some of his suggestions and discussing
them with those of you concerned with area C in the days to come.
I am writing this from home, so cannot edit this and correct typos I see
above. If I go into the editor, it brings everything to a screeching
halt on my PC, which I use as a terminal. Maybe someone knows why and
can tell me how to change it.
Fifteen Dandelions arrived at Ventura today. Some of these will be set
up soon in the old emlac room. It was very exciting to see them come
in the door. Another small step toward the Center we all imagine
existing some day.
Amos Tversky and I have prepared a joint statement describing the study
of cognition and information at Stanford, an umbrella for both the
Sloan Program and for Program SL. This will be sent out as soon as
he runs it past the executive committee of the Sloan Program and I do
the same here. Those of you with accounts on SRI can read the draft
on my account <kjb>sloan and forward comments. It will be sent out
to everyone in the departments of c.s., phil, linguistics and psychology,
including the graduate students.
Feel free to call or come see me in the afternoons,
Jon
-------
∂05-Dec-83 2336 @SU-SCORE.ARPA:uucp@Shasta Re: Call for Bell Fellowship Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Dec 83 23:35:58 PST
Received: from Shasta by SU-SCORE.ARPA with TCP; Mon 5 Dec 83 23:35:01-PST
Received: from decwrl by Shasta with UUCP; Mon, 5 Dec 83 23:34 PST
Date: 5 Dec 1983 2054-PST (Monday)
Sender: uucp@Shasta
From: decwrl!baskett (Forest Baskett) <decwrl!baskett@Shasta>
Subject: Re: Call for Bell Fellowship Nominations
Message-Id: <8312060454.AA07207@DECWRL>
Received: by DECWRL (3.327/4.09) 5 Dec 83 20:54:47 PST (Mon)
To: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Cc: JF@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
In-Reply-To: Your message of Wed 30 Nov 83 11:20:42-PST.
<8311301933.AA06189@DECWRL>
I'm happy to nominate John Lamping, Kim McCall, Jeff Naughton, and
Billy Wilson as candidates for the Bell Fellowship (alphabetical order).
They are all students in CS 311 who stand out prominently above the
rest of a very large class of mostly first year students. They all
seem to have significant theoretical capabilities coupled with major
architectural and systems interests that would seem to make them good
candidates for a Bell Fellowship. At least two of them are not
currently holders of other fellowships. What do I do next? (A second
from this group to one or more of these nominations would be nice.)
Forest
∂06-Dec-83 0040 @SRI-AI.ARPA:PULLUM%HP-HULK.HP-Labs@Rand-Relay WCCFL DEADLINE
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Dec 83 00:40:24 PST
Received: from rand-relay.ARPA by SRI-AI.ARPA with TCP; Tue 6 Dec 83 00:05:44-PST
Date: 5 Dec 1983 1541-PST
From: PULLUM.HP-HULK@Rand-Relay
Return-Path: <PULLUM%HP-HULK.HP-Labs@Rand-Relay>
Subject: WCCFL DEADLINE
Received: by HP-VENUS via CHAOSNET; 5 Dec 1983 15:41:36-PST
To: csli-friends@SRI-AI
Message-Id: <439515698.29052.hplabs@HP-VENUS>
Via: HP-Labs; 5 Dec 83 23:29-PST
The deadline for abstracts for the third West Coast Conference on Formal
Linguistics (March 16-18, 1984, UC Santa Cruz) is approaching: abstracts
have to be received at Cowell College, UCSC, Santa Cruz, California 95064
by 5p.m. on Friday, December 16, 1983. The deadline will be a strict one;
abstracts will be distributed for consideration by the program committee
immediately, and abstracts received too late for distribution will not be
forwarded to the committee and will be returned unopened.
-------
∂06-Dec-83 0826 TAJNAI@SU-SCORE.ARPA Re: Call for Bell Fellowship Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Dec 83 08:26:12 PST
Date: Tue 6 Dec 83 08:25:58-PST
From: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: Re: Call for Bell Fellowship Nominations
To: decwrl!baskett@SU-SHASTA.ARPA
cc: JF@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
In-Reply-To: Message from "decwrl!baskett (Forest Baskett) <decwrl!baskett@Shasta>" of Mon 5 Dec 83 20:54:00-PST
When the 3 candidates are chosen from the dept. I will notify them and
we'll start putting packets together. At that point they will need
letters of recommendation.
Wilson has an NSF fellowship.
Carolyn
-------
∂06-Dec-83 0905 PETERS@SRI-AI.ARPA Talk Wednesday
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Dec 83 09:05:21 PST
Date: Tue 6 Dec 83 09:03:39-PST
From: Stanley Peters <PETERS@SRI-AI.ARPA>
Subject: Talk Wednesday
To: csli-folks@SRI-AI.ARPA
cc: csli-b1@SRI-AI.ARPA
At a special meeting of Projects B1 and D4 tomorrow afternoon
Dr. Richmond Thomason
will speak on
"Accomodation, Conversational Planning and Implicature".
The talk will be in the Ventura Hall conference room at 2:00 p.m.
-------
∂06-Dec-83 1246 SCHMIDT@SUMEX-AIM.ARPA IMPORTANT LM-3600 WARNING
Received: from SUMEX-AIM by SU-AI with TCP/SMTP; 6 Dec 83 12:46:00 PST
Date: Tue 6 Dec 83 12:44:31-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: IMPORTANT LM-3600 WARNING
To: HPP-Lisp-Machines@SUMEX-AIM.ARPA, Welch-Road@SUMEX-AIM.ARPA
I hope Dick and Bud won't mind my redistributing this more widely, as it
is of extreme importance that everyone who gets within 20 ft of a 3600 be
aware of the danger detailed below. --Christopher
Date: 05 Dec 83 1445 PST
From: Dick Gabriel <RPG@SU-AI>
Subject: Computer Vandals
To: su-bboards@SU-AI
For the last few weeks we have been suffering from computer vandals
within the department. The perpetrator has been powering off the
3600 consoles in 433, and just the other day he unplugged the LM-2.
Once, after he powered down the 3600 consoles, he wrote a note which
I paraphrase:
If you object to these consoles being powered down, complain
to CSD-CF.
The perpetrator wishes to remain anonymous, but I would like that
individual to at least sign the name of the project he works for so that
we can bill that project for the damage done to the hardware. Because of
poor design, it is not safe to power the consoles on and off without
risking some chips. The consoles are labelled reflecting this fact. In
addition, even though the machines may look idle, someone located in the
basement on the 3600 down there could be using one or both remotely.
Posting this notice on the BBoards constitutes fair warning that powering
down the 3600 consoles can damage them. If you power them down again
I will regard that act exactly as if you took a sledgehammer to SCORE's
cpu. I doubt that the university would react well to a report of vandalism
by one of its employees or students.
-rpg-
[and from a separate communication]
If you power off the disk drives before the CPU, you have a 25% chance
of frying some ECL chips. If you power off the consoles, because the
tube is running at the extremes of its specs, it can also fry some chips.
Poor design, to be sure, but no excuse for vandalism.
-------
∂06-Dec-83 1618 GOLUB@SU-SCORE.ARPA vote on consulting professors
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Dec 83 16:18:08 PST
Date: Tue 6 Dec 83 16:17:26-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: vote on consulting professors
To: faculty@SU-SCORE.ARPA
There have been reservations by some of the faculty concerning
the consulting professorships. Therefore I am appointing a committee to
look into the appropriate action. Terry Winograd will chair the committee.
GENE
-------
∂06-Dec-83 1654 EMMA@SRI-AI.ARPA PARTY
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Dec 83 16:54:46 PST
Date: Tue 6 Dec 83 16:55:40-PST
From: EMMA@SRI-AI.ARPA
Subject: PARTY
To: csli-folks@SRI-AI.ARPA
* + +
~~~
+ * * * + +
+ ~~~~~~~ +
+ * * * * * + * + +
~~~~~~~~~~~ + *** +
* * * * * * * + + ***** +
~~~~~~~~~~~~~~~ * + *******
* * * * * * * * * * *** *********
~~~~~~~~~~~~~~~~~~~ *** ***** *********** +
+ * * * * * * * * * * * ***** ******* + ************* +
!!! + ******* ! + !!! +
',',',',''',',',',',',',',',','!',','',',',',',',',',',',',',',',',',',',',',',
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Holiday Potluck for CSLI Folks
Please come to a CSLI holiday get together on Thursday December 15
at 6:30 at room 1610 of the Oak Creek Apartments (directions will
be forthcoming). Bring a guest and some sort of Holiday food for
6 people -- anything from hors d'oeuvres to main dishes to after
dinner desserts and snacks. Wine, beer, soft drinks, and tableware
will be provided.
Please RSVP to EMMA@SRI-ai.
If you would like to make mulled wine or other holiday beverage,
CSLI will reimburse you for the ingredients. Let EMMA know
so we can plan accordingly for the other drinks. There will be
a stove and oven available.
Any contributions such as music or other entertainment would also
be greatly appreciated.
************%%%%%%%%%%%%*********%%%%%%%%%**********%%%%%%%%%***********%%%%%%%
-------
∂06-Dec-83 1656 BRODER@SU-SCORE.ARPA Last AFLB of 1983
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Dec 83 16:56:00 PST
Date: Tue 6 Dec 83 16:43:30-PST
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Last AFLB of 1983
To: aflb.all@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
L A S T A F L B O F 1 9 8 3
This Thursday (Dec. 8) is the last AFLB of the year. It will be an
informal meeting. We shall go round the table and everyone will talk
a bit about her/his research. If this does not fill an hour, some
people have offered to present some short results.
AFLB will resume on January 12, 1984. The first speaker of the year
will be Dick Karp from U. C. Berkeley.
We need speakers for the winter quarter. Please volunteer now!
Happy Holidays,
Andrei
-------
∂07-Dec-83 0058 LAWS@SRI-AI.ARPA AIList Digest V1 #110
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Dec 83 00:57:12 PST
Date: Tue 6 Dec 1983 20:24-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #110
To: AIList@SRI-AI
AIList Digest Wednesday, 7 Dec 1983 Volume 1 : Issue 110
Today's Topics:
AI and Manufacturing - Request,
Bindings - HPP,
Programming Languages - Environments & Productivity,
Vision - Cultural Influences on Perception,
AI Jargon - Mental States of Machines,
AI Challenge & Expert Systems,
Seminar - Universal Subgoaling
----------------------------------------------------------------------
Date: 5 Dec 83 15:14:26 EST (Mon)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: AI and Automated Manufacturing
Some colleagues and I at the University of Maryland are doing a literature
search on the use of AI techniques in Automated Manufacturing.
The results of the literature search will comprise a report to be
sent to the National Bureau of Standards as part of a research
contract. We'd appreciate any relevant information any of you may
have--especially copies of papers or technical reports. In
return, I can send you (on request) copies of some papers I have
published on that subject, as well as a copy of the literature
search when it is completed. My mailing address is
Dana S. Nau
Computer Science Dept.
University of Maryland
College Park, MD 20742
------------------------------
Date: Mon 5 Dec 83 08:27:28-PST
From: HPP Secretary <HPP-SECRETARY@SUMEX-AIM.ARPA>
Subject: New Address for HPP
[Reprinted from the SU-SCORE bboard.]
The HPP has moved. Our new address is:
Heuristic Programming Project
Computer Science Department
Stanford University
701 Welch Road, Bldg. C
Palo Alto, CA 94304
------------------------------
Date: Mon, 5 Dec 83 09:43:51 PST
From: Seth Goldman <seth@UCLA-CS>
Subject: Programming environments are fine, but...
What are all of you doing with your nifty, adequate, and/or brain-damaged
computing environments? Also, if we're going to discuss environments, it
would be more productive I think to give concrete examples of the form:
I was trying to do or solve X
Here is how my environment helped me OR
This is what I need and don't yet have
It would also be nice to see some issues of AIList dedicated to presenting
1 or 2 paragraph abstracts of current work being pursued by readers and
contributors to this list. How about it Ken?
[Sounds good to me. It would be interesting to know
whether progress in AI is currently held back by conceptual
problems or just by the programming effort of building
large and user-friendly systems. -- KIL]
Seth Goldman
------------------------------
Date: Monday, 5 December 1983 13:47:13 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: marcel on "lisp productivity question"
I just thought I should mention that production system languages
share all the desirable features of Prolog mentioned in the previous
message, particularly being "rule-based computing with a clean formalism".
The main differences with the OPS family of languages are that OPS uses
primarily forward inference instead of backward inference, and a slightly
different matching mechanism. Preferring one over the other depends, I
suspect, on whether you think in terms of proofs or derivations.
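[A minimal sketch of the forward/backward contrast being drawn above; the toy
rules and facts are invented, and real OPS matching and Prolog resolution are
of course much richer:]
    # Invented rules of the form (conclusion, condition) and one starting fact.
    rules = [("wet", "rained"), ("slippery", "wet")]
    facts = {"rained"}

    def forward(facts, rules):
        # OPS-style: from the facts, keep firing rules until nothing new appears.
        changed = True
        while changed:
            changed = False
            for conclusion, condition in rules:
                if condition in facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    def backward(goal, facts, rules):
        # Prolog-style: start from the goal and chase rule conditions.
        if goal in facts:
            return True
        return any(conclusion == goal and backward(condition, facts, rules)
                   for conclusion, condition in rules)

    print(forward(set(facts), rules))          # contains "wet" and "slippery"
    print(backward("slippery", facts, rules))  # True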
------------------------------
Date: Mon, 5 Dec 83 10:23:17 pst
From: evans@Nosc (Evan C. Evans)
Subject: Vision & Such
Ken Laws in AIList Digest 1:99 states: an adequate answer [to
the question of why computers can't see yet] requires a guess
at how it is that the human vision system can work in all cases.
I cannot answer Ken's question, but perhaps I can provide some
useful input.
language shapes culture (Sapir-Whorf hypothesis)
culture shapes vision (see following)
vision shapes language (a priori)
The influence of culture on perception (vision) takes many forms.
A statistical examination (unpublished) of the British newspaper
game "Where's the ball?" is worth consideration. This game has
been appearing for some time in British, Australian, New Zealand,
& Fijian papers. So far as I know, it has not yet made its
appearance in U.S. papers. The game is played thus:
A photograph of some common sport involving a ball is
published with the ball erased from the picture & the question,
where's the ball? Various members of the readership send in
their guesses & that closest to the ball's actual position in the
unmodified photo wins. Some time back the responses to several
rounds of this game were subjected to statistical analysis. This
analysis showed that there were statistically valid differences
associated with the cultural background of the participants.
This finding was particularly striking in Fiji with a resident
population comprising several very different cultural groups.
Ball placement by the different groups tended to cluster at
significantly different locations in the picture, even for a game
like soccer that was well known & played by all. It is
unfortunate that this work (not mine) has not been published. It does
suggest two things: a.) a cultural influence on vision & perception,
& b.) a powerful means of conducting experiments to learn
more about this influence. For instance, this same research was
elaborated into various TV displays designed to discover where
children of various age groups placed an unseen object to which
an arrow pointed. The children responded enthusiastically to
this new TV game, giving their answers by means of a light pen.
Yet statistically significant amounts of data were collected
efficiently & painlessly.
I've constructed the loop above to suggest that none of
the three: vision, language, & culture should be studied out of
context.
E. C. Evans III
------------------------------
Date: Sat 3 Dec 83 00:42:50-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Mental states of machines
Steven Gutfreund's criticism of John McCarthy is unjustified. I
haven't read the article in "Psychology Today", but I am familiar with
the notion put forward by JMC and condemned by SG. The question can
be put in simple terms: is it useful to attribute mental states and
attitudes to machines? The answer is that our terms for mental states
and attitudes ("believe", "desire", "expect", etc...) represent a
classification of possible relationships between world states and the
internal (inacessible) states of designated individuals. Now, for
simple individuals and worlds, for example small finite automata, it
is possible to classify the world-individual relationships with simple
and tractable predicates. For more complicated systems, however, the
language of mental states is likely to become essential, because the
classifications it provides may well be computationally tractable in
ways that other classifications are not. Remember that individuals of
any "intelligence" must have states that encode classifications of
their own states and those of other individuals. Computational
representations of the language of mental states seem to be the only
means we have to construct machines with such rich sets of states that
can operate in "rational" ways with respect to the world and other
individuals.
SG's comment is analogous to the following criticism of our use of the
terms like "execution", "wait" or "active" when talking about the
states of computers: "it is wrong to use such terms when we all know
that what is down there is just a finite state machine, which we
understand so well mathematically."
Fernando Pereira
------------------------------
Date: Mon 5 Dec 83 11:21:56-PST
From: Wilkins <WILKINS@SRI-AI.ARPA>
Subject: complexity of formal systems
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
They then resort to arcane languages and to attributing 'mental'
characteristics to what are basically fuzzy algorithms that have been applied
to poorly formalized or poorly characterized problems. Once the problems are
better understood and are given a more precise formal characterization, one
no longer needs "AI" techniques.
I think Professor McCarthy is thinking of systems (possibly not built yet)
whose complexity comes from size and not from imprecise formalization. A
huge AI program has lots of knowledge; all of it may be precisely formalized
in first-order logic or some other well-understood formalism, and this knowledge
may be combined and used by well-understood and precise inference algorithms,
and yet because of the (for practical purposes) infinite number of inputs and
possible combinations of the individual knowledge formulas, the easiest
(best? only?) way to describe the behavior of the system is by attributing
mental characteristics. Some AI systems approaching this complexity already
exist. This has nothing to do with "fuzzy algorithms" or "poorly formalized
problems", it is just the inherent complexity of the system. If you think
you can usefully explain the practical behavior of any well-formalized system
without using mental characteristics, I submit that you haven't tried it on a
large enough system (e.g. some systems today need a larger address space than
that available on a DEC 2060 -- combining that much knowledge can produce
quite complex behavior).
------------------------------
Date: 28 Nov 83 3:10:20-PST (Mon)
From: harpo!floyd!clyde!akgua!sb1!sb6!bpa!burdvax!sjuvax!rbanerji@Ucb-Vax
Subject: Re: Clarifying my "AI Challange"
Article-I.D.: sjuvax.157
[...]
I am reacting to Johnson, Helly and Dietterich. I really liked
[Ken Laws'] technical evaluation of Knowledge-based programming. Basically
similar to what Tom also said in defense of Knowledge-based programming
but KIL said it much more clearly.
On one aspect, I have to agree with Johnson about expert systems
and hackery, though. The only place there is any attempt on the part of
an author to explain the structure of the knowledge base(s) is in the
handbook. But I bet that as the structures are changed by later authors
for various justified and unjustified reasons, they will not be clearly
explained except in vague terms.
I do not accept Dietterich's explanation that AI papers are hard
to read because of terminology; or because what they are trying to do
are so hard. On the latter point, we do not expect that what they are
DOING be easy, just that HOW they are doing it be clearly explained:
and that the definition of clarity follow the lines set out in classical
scientific disciplines. I hope that the days are gone when AI was
considered some sort of superscience answerable to none. On the matter
of terminology, papers (for example) on algebraic topology have more
terminology than AI: terminology developed over a longer period of time.
But if one wants to and has the time, he can go back, back, back along
lines of reference and to textbooks and be assured he will have an answer.
In AI, about the only hope is to talk to the author and unravel his answers
carefully and patiently and hope that somewhere along the line one does not
get "well, there is a hack there..it is kind of long and hard to explain:
let me show you the overall effect"
In other sciences, hard things are explained on the basis of
previously explained things. These explanation trees are much deeper
than in AI; they are so strong and precise that climbing them may
be hard, but never hopeless.
I agree with Helly in that this lack is due to the fact that no
attempt has been made in AI to have workers start with a common basis in
science, or even in scientific methodology. It has suffered in the past
because of this. When existing methods of data representation and processing
in theorem proving were found inefficient, the AI culture developed this
self image that its needs were ahead of logic: notwithstanding the fact
that the techniques they were using were representable in logic and that
the reason for their seeming success was in the fact that they were designed
to achieve efficiency at the cost (often high) of flexibility. Since
then, those words have been "eaten": but at considerable cost. The reason
may well be that the critics of logic did not know enough logic to see this.
In some cases, their professors did--but never cared to explain what the
real difficulty in logic was. Or maybe they believed their own propaganda.
This lack of uniformity of background came out clearly when Tom said
that because of AI work people now clearly understood the difference between
the subset of a set and the element of a set. This difference has been well
known at least since early this century if not earlier. If workers in AI
did not know it before, it is because of their reluctance to know the meaning
of a term before they use it. This has also often come from their belief
that precise definitions will rob their terms of their richness (not realising
that once they have interpreted their terms by a program, they have a precise
definition, only written in a much less comprehensible way: set theorists
never had any difficulty understanding the difference between subsets and
elements). If they were trained, they would know the techniques that are
used in Science for defining terms.
I disagree with Helly that Computer Science in general is unscientific.
There has always been a precise mathematical basis for theorem proving (AI,
actually) and for computation and complexity theory. It is true, however, that
the traditional techniques of experimental research have not been used in
AI at all: people have tried hard to use them in software, but seem to
be having difficulties.
Would Helly disagree with me if I say that Newell and Simon's work
in computer modelling of psychological processes has been carried out
with at least the amount of scientific discipline that psychologists use?
I have always seen that work as one of the success stories in AI. And
at least some psychologists seem to agree.
I agree with Tom that AI will have to keep going even if someone
proves that P=NP. The reason is that many AI problems are amenable to
N↑2 methods already: except that N is too big. In this connection I have
a question, in case someone can tell me. I think Rabin has a theorem
that given any system of logic and any computable function, there is
a true statement which takes longer to prove than that function predicts.
What does this say about the relation between P and NP, if anything?
Too long already!
..allegra!astrovax!sjuvax!rbanerji
------------------------------
Date: 1 Dec 83 13:51:36-PST (Thu)
From: decvax!duke!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Expert Systems
Article-I.D.: ncsu.2420
Are expert systems new? Different? Well, how about an example. Time
was, to run a computer system, one needed at least one operator to care
for and feed the system. This is increasingly handled by sophisticated
operating systems. As such, is an operating system an "expert system"?
An OS is usually developed using a style of programming which is quite
different from that of wimpy, unskilled, unenlightened applications
programmers. It would be very hard to build an operating system in the
applications style. (I claim). The people who developed the style and
practice it to build systems are not usually AI people although I would
wager the personality profiles would be quite similar.
Now, that is I think a major point. Are there different types of people in
Physics as compared to Biology? I would say so, having seen some of each.
Further, biologists do research in ways that seem different (again, this is
purely idiosyncratic evidence) from the ways physicists do. Is it that one
group knows how to do science better, or are the fields just so different,
or are the people attracted to each just different?
Now, suppose a team of people got together and built an expert system which
was fully capable of taking over the control of a very sophisticated
(previously manual, by highly trained people) inventory, billing and
ordering system. I claim that this is at least as complex as diagnosis
of and dosing of particular drugs (e.g., MYCIN). My expert system
was likely written in Cobol by people doing things in quite different ways
from AI or systems hackers.
One might want to argue that the productivity was much lower, that the
result was harder to change, and so on. I would prefer to see this in
figures, based on proper comparisons. I suspect that the complexity of the
commercial software I mentioned is MUCH greater than the usual problem
attacked by AI people, so that the "productivity" might be comparable,
with the extra time reflecting the complexity. For example, designing
the reports and generating them for a large complex system (and doing
a good job) may take a large fraction of the total time, yet such
reporting is not usually done in the AI world. Traces of decisions
and other discourse are not the same. The latter is easier I think, or
at least it takes less work.
What I'm getting at is that expert systems have been around for a long
time; it's only recently that AI people have gotten into the arena. There
are other techniques which have been applied to developing these, and
I am waiting to be convinced that the AI people have a priori superior
strategies. I would like to be so convinced and I expect someday to
be convinced, but then again, I probably also fit the AI personality
profile so I am rather biased.
----GaryFostel----
------------------------------
Date: 5 Dec 1983 11:11:52-EST
From: John.Laird at CMU-CS-ZOG
Subject: Thesis Defense
[Reprinted from the CMU-AI bboard.]
Come see my thesis defense: Wednesday, December 7 at 3:30pm in 5409 Wean Hall
UNIVERSAL SUBGOALING
ABSTRACT
A major aim of Artificial Intelligence (AI) is to create systems that
display general problem solving ability. When problem solving, knowledge is
used to avoid uncertainty over what to do next, or to handle the
difficulties that arise when uncertainty cannot be avoided. Uncertainty
is handled in AI problem solvers through the use of methods and subgoals,
where a method specifies the behavior for avoiding uncertainty in pursuit
of a goal, and a subgoal allows the system to recover from a difficulty once
it arises. A general problem solver should be able to respond to every task
with appropriate methods to avoid uncertainty, and when difficulties do
arise, the problem solver should be able to recover by using an appropriate
subgoal. However, current AI problem solvers are limited in their generality
because they depend on sets of fixed methods and subgoals.
In previous work, we investigated the weak methods and proposed that a
problem solver does not explicitly select a method for a goal, with the
inherent risk of selecting an inappropriate method. Instead, the problem
solver is organized so that the appropriate weak method emerges during
problem solving from its knowledge of the task. We called this organization
a universal weak method and we demonstrated it within an architecture,
called SOAR. However, we were limited to subgoal-free weak methods.
The purpose of this thesis is to develop a problem solver where subgoals
arise whenever the problem solver encounters a difficulty in performing the
functions of problem solving. We call this capability universal subgoaling.
In this talk, I will describe and demonstrate an implementation of universal
subgoaling within SOAR2, a production system based on search in a problem
space. Since SOAR2 includes both universal subgoaling and a universal weak
method, it is not limited by a fixed set of subgoals or methods. We provide
two demonstrations of this: (1) SOAR2 creates subgoals whenever difficulties
arise during problem solving, (2) SOAR2 extends the set of weak methods that
emerge from the structure of a task without explicit selection.
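To make the impasse-driven idea above concrete, here is a minimal illustrative
sketch in Python; the Operator class, the lookahead used to resolve a tie, and
the toy counting task are assumptions of this sketch, not part of SOAR2. When
search-control knowledge cannot single out one operator, a subgoal is created
whose only job is to evaluate the tied candidates.

# A toy problem space: states are integers 0..10, operators add a delta.
class Operator:
    def __init__(self, name, delta):
        self.name, self.delta = name, delta
    def applicable(self, state):
        return 0 <= state + self.delta <= 10
    def apply(self, state):
        return state + self.delta

def resolve_tie(state, goal, candidates):
    # Subgoal created on a tie impasse: evaluate the tied operators
    # (here by one-step distance-to-goal lookahead) and return a choice.
    return min(candidates, key=lambda op: abs(goal - op.apply(state)))

def solve(state, goal, operators, max_steps=20):
    # Return a list of operator names reaching the goal, subgoaling on ties.
    path = []
    for _ in range(max_steps):
        if state == goal:
            return path
        candidates = [op for op in operators if op.applicable(state)]
        if not candidates:
            return None                  # dead end: no applicable operator
        # With no further search-control knowledge every candidate is tied,
        # so a tie impasse arises and a subgoal resolves it.
        chosen = resolve_tie(state, goal, candidates)
        path.append(chosen.name)
        state = chosen.apply(state)
    return None

print(solve(2, 9, [Operator("inc", +1), Operator("dec", -1), Operator("jump", +3)]))
# ['jump', 'jump', 'inc']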
------------------------------
End of AIList Digest
********************
∂07-Dec-83 1406 DKANERVA@SRI-AI.ARPA Room change for Thursday Conditionals Symposium
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Dec 83 14:02:49 PST
Date: Wed 7 Dec 83 13:53:49-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Room change for Thursday Conditionals Symposium
To: csli-friends@SRI-AI.ARPA
Tom Wasow would like people to note a change in the place where the
Thursday afternoon meeting of the Conditionals Symposium will be held.
FOR THURSDAY AFTERNOON ONLY, the Conditionals Symposium will be held
in the Forum Room of Meyer Library. This is the 2:00-5:00 session
on "Preliminary Definitions and Distinctions."
All other sessions will be held as scheduled in CERAS, Room 112.
Please check with the Stanford Linguistics Department if you have
any questions.
-------
∂07-Dec-83 1917 DKANERVA@SRI-AI.ARPA Newsletter No. 12, December 8, 1983
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Dec 83 19:16:37 PST
Date: Wed 7 Dec 83 17:51:25-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Newsletter No. 12, December 8, 1983
To: csli-friends@SRI-AI.ARPA
cc: Outside-newsletter: ;
CSLI Newsletter
December 8, 1983 * * * Number 12
END-OF-QUARTER ACTIVITIES AND CHANGES IN SCHEDULE
The Linguistics Department, with some cooperation from CSLI, is
sponsoring a symposium on conditionals and cognitive processes this
week, Thursday through Saturday, as announced in the November 10
Newsletter (No. 8). The Symposium begins at 2:00 today (Dec. 8). In
order not to conflict with these activities, CSLI will postpone this
Thursday's activities, with the exception of TINLunch, to next
Thursday, December 15--sorry for the late word. Please note that, for
Thursday afternoon only, the Conditionals Symposium will be held in
the Forum Room of Meyer Library; all other sessions will be held as
scheduled in CERAS, Room 112. Please check with the Stanford
Linguistics Department if you have any questions.
The last regular CSLI activities for this year will be on
Thursday, December 15. The schedule is given on page 3 of this
Newsletter.
We have decided to keep Thursday as CSLI day next quarter.
However, the seminars will be more technical than they were this
quarter. The morning seminar will be in area D; the afternoon session
will be a course on situation semantics. TINLunch and the Colloquium
series will continue as before.
We will soon have a lot more space at Ventura, though still not
enough. But as the top floor of Casita is turned over to us and
trailers are brought in, the rather trying circumstances of this
quarter will ease some. I hope this will make Thursdays more pleasant
for all, and make people feel much more at home at Ventura on a
day-to-day basis than has been possible so far.
Fifteen Dandelions have arrived at Ventura. Some of these will
be set up soon in the old IMLAC room. It was very exciting to see
them come in the door. Another small step toward the Center we all
imagine existing some day.
- Jon Barwise
* * * * * * *
SCHEDULE OF VISITORS
Hans Kamp (Bedford College, London), Richmond Thomason
(University of Pittsburgh), and Robert Stalnaker (Cornell University)
are visiting CSLI in conjunction with their participation in the
Symposium on Conditionals and Cognitive Processes, December 8-11,
sponsored by the Stanford Linguistics Department. They will be giving
talks, the times and titles of which will be announced later.
* * * * * * *
! Page 2
* * * * * * *
CSLI NATURAL LANGUAGE SEMINAR NEXT WEEK, DECEMBER 15
Martin Kay, of Xerox PARC, will be speaking on "Unification" at
next Thursday's Natural Language Seminar, 10:00 a.m., in Redwood Hall,
room G-19.
Abstract: "Unification" keeps coming up around CSLI. Now you can
discover what it is and why it is the greatest thing since dative
movement. I will show how this neat little idea, with an occasional
extension this way or that, unifies such diverse things as PROLOG,
LFG, GPSG, phonology, and GSPG, and how Functional Unification Grammar
beats all of them six ways to Christmas. A familiarity with the basic
philosophy of the reverend Moon will be assumed.
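For those who will miss the talk, the flavor of the operation can be sketched
in a few lines of Python; representing feature structures as nested
dictionaries with atomic values is an assumption of this sketch, not Kay's
Functional Unification Grammar itself.

FAIL = object()

def unify(a, b):
    # Unify two feature structures; return the merged structure or FAIL.
    if a == b:
        return a
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for feature, value in b.items():
            if feature in result:
                merged = unify(result[feature], value)
                if merged is FAIL:
                    return FAIL          # conflicting values for a feature
                result[feature] = merged
            else:
                result[feature] = value  # feature present in only one structure
        return result
    return FAIL                          # two distinct atoms clash

print(unify({"cat": "NP", "agr": {"num": "sing"}},
            {"agr": {"num": "sing", "per": 3}}))
# {'cat': 'NP', 'agr': {'num': 'sing', 'per': 3}}
print(unify({"agr": {"num": "sing"}}, {"agr": {"num": "plur"}}) is FAIL)
# True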
* * * * * * *
CSLI PROJECTS C1/D1 MEETING
THE SEMANTICS OF COMPUTER LANGUAGES GROUP
Speaker: Joe Halpern (IBM San Jose)
Title: "From Denotational to Operational and Axiomatic Semantics"
Time: Tuesday, December 13th, 9:30-11:30
Place: Xerox PARC, room 1500
Abstract: We discuss how to give denotational semantics to an
ALGOL-like language with procedure parameters, blocks, and sharing,
but without function procedures, by translating programs into typed
lambda-calculus. Difficulties arise in making semantic sense out of
the notion of a "new" location. We suggest a way of doing so by
introducing the notion of a store model, which can be used to model
local storage allocation for blocks. Using these ideas we show how to
construct a (relatively) complete axiom system for our language.
Visitors should arrive at 9:25 a.m. at the lower-level employees'
entrance, where they will be issued red badges before entering the
premises.
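The store-model idea can be sketched very roughly: the meaning of a statement
is a function from stores to stores, and entering a block binds its local
variable to a "new" location that is discarded on exit. The toy language and
the Python representation below are assumptions of this sketch, not the ALGOL
fragment or the typed lambda-calculus translation of the talk.

def const(n):     return lambda env, store: n
def var(name):    return lambda env, store: store[env[name]]
def plus(e1, e2): return lambda env, store: e1(env, store) + e2(env, store)

def assign(name, expr):
    # [[ name := expr ]] maps (environment, store) to an updated store.
    def meaning(env, store):
        new_store = dict(store)
        new_store[env[name]] = expr(env, store)
        return new_store
    return meaning

def seq(*stmts):
    # Statement composition: thread the store through each statement in turn.
    def meaning(env, store):
        for s in stmts:
            store = s(env, store)
        return store
    return meaning

def block(name, init, body):
    # "begin new name := init; body end": bind name to a fresh location.
    def meaning(env, store):
        loc = max(store, default=-1) + 1           # a "new" location
        inner_env = dict(env, **{name: loc})
        inner_store = dict(store, **{loc: init(env, store)})
        final = body(inner_env, inner_store)
        final.pop(loc)                             # local storage is discarded
        return final
    return meaning

env, store = {"x": 0}, {0: 1}
prog = block("y", const(10),
             seq(assign("y", plus(var("y"), var("x"))),
                 assign("x", var("y"))))
print(prog(env, store))                            # {0: 11}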
* * * * * * *
CSLI SCHEDULE FOR *THIS* THURSDAY, December 8th, 1983
PLEASE NOTE:
Today's activities, except for TINLunch, have been postponed to
next Thursday, December 15, to avoid conflict with the Symposium on
Conditionals and Cognitive Processes being sponsored by the Stanford
Linguistics Department starting today, December 8, and continuing
through Saturday, December 11.
12:00 TINLunch
Discussion leader: Robert C. Moore, SRI
Paper for discussion: "Cognitive Wheels: The Frame Problem of AI"
by Daniel Dennett
Place: Ventura Hall
* * * * * * *
! Page 3
* * * * * * *
CSLI SCHEDULE FOR *NEXT* THURSDAY, December 15th, 1983
10:00 Research Seminar on Natural Language
Speaker: Martin Kay (Xerox-CSLI)
Title: "Unification"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Ray Perrault
Paper for discussion: "On Time, Tense, and Aspect: An Essay
in English Metaphysics"
by Emmon Bach
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speakers: Fernando Pereira and Stuart Shieber (CSLI)
Title: "Feature Systems and Their Use in Grammars"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Richard Waldinger (SRI)
Title: "Deductive Program Synthesis Research"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. $0.75 all-day parking is available
in a lot located just off Campus Drive, across from the construction
site.
* * * * * * *
TINLUNCH SCHEDULE
December 15 C. Raymond Perrault
December 22 & 29 Christmas Vacation
January 5 Fernando Pereira
January 12 Marsha Bush
January 19 John Perry
January 26 Stanley Peters
* * * * * * *
! Page 4
* * * * * * *
PROJECTS B1 AND D4 MEETING
At a special meeting of Projects B1 and D4 in Ventura Hall at
2:00 p.m., Wednesday, December 7, Dr. Richmond Thomason spoke on
"Accommodation, Conversational Planning, and Implicature."
* * * * * * *
WHY CONTEXT WON'T GO AWAY
On Wednesday, December 6, the last meeting of the quarter was
held with Ivan Sag speaking on "Formal Semantics and Extralinguistic
Context." Next quarter, we will continue at the same spatio-temporal
location. We have a rather exciting topic: the analysis of discourse.
Detailed information on next term's theme and speakers will be
provided soon. Given below is the abstract of Sag's talk.
"Formal Semantics and Extralinguistic Context"
by Ivan Sag
This paper is a reaction to the suggestion that examples like
The ham sandwich at table 9 is getting restless.
[waiter to waiter] (due to G. Nunberg)
He porched the newspaper. (due to Clark and Clark)
threaten the enterprise of constructing a theory of compositional
aspects of literal meaning in natural languages. With Kaplan's logic
of demonstratives as a point of departure, I develop a framework in
which "transfers of sense" and "transfers of reference" can be studied
within a formal semantic analysis. The notion of context is expanded
to include functions which transfer the interpretations of
subconstituents in such a way that compositional principles can be
maintained. The resulting approach distinguishes two ways in which
context affects interpretation: (1) in the initial determination of
"literal utterance meaning" and (2) in the determination (say, in the
Gricean fashion) of "conveyed meaning".
* * * * * * *
LINGUISTICS DEPARTMENT COLLOQUIA
Tuesdays, 3:15 p.m., room to be announced.
December 13: Beatriz Lavandera, formerly of the Stanford Linguistics
Department and now Senior Researcher at the Consejo Nacional de
Investigaciones Cientificas y Tecnicas, Buenos Aires, Argentina, will
speak on "Between Personal and Impersonal in Spanish Discourse".
January 10: Carol Neidle of the Department of Modern Languages, Boston
University, will speak on a topic in syntax to be determined.
January 31: Francisca Sanchez, a doctoral candidate in the Stanford
Linguistics Department, will present a dissertation proposal entitled
"A Sociolinguistic Study of Chicano Spanish"
* * * * * * *
! Page 5
* * * * * * *
COMPUTER SCIENCE COLLOQUIUM NOTICE WEEK OF 12/5/83-12/9/83
12/05/1983 Numerical Analysis Seminar
Monday Germund Dahlquist
4:15 Stanford University & KTH/Stockholm
Math 380C Some Matrix Questions Related to ODE's:Part I
12/06/1983 Medical Computing Journal Club
Tuesday Greg Cooper
1:30 - 2:30 Stanford
TC135 Medical Center Review of "Characteristics of Clinical Information
Searching"
12/06/1983 Knowledge Representation Group Seminar
Tuesday Stephen Westfold
2:30-3:30 Stanford and Kestrel Institute
TC-135 (Med School) Building Knowledge Bases as Programming
12/06/1983 CS Colloquium
Tuesday Keith Lantz
4:15 CS Dept. Stanford U.
Terman Aud. Virtual Terminals and Network Graphics in
Workstation-Based Distributed Systems
12/07/1983 Talkware Seminar
Wednesday Donald Knuth
2:15-4:00 Stanford U. CS Dept.
380Y (Math Corner) On the Design of Programming Languages
12/07/1983 EE380/CS310 Computer Forum Seminar
Wednesday Larry Stewart
4:15 Xerox PARC
Skilling Aud. Etherphone-Ethernet Voice Service
Thursday Postponed until 12/15/1983
4:15
Redwood Hall Rm G-19
12/08/1983 Supercomputer Seminar
Thursday Bob Keller
4:15 - 5:15 Utah/Livermore
200-034
12/09/1983 Database Research Seminar
Friday Tom Munnecke
3:15 - 4:30 Veteran's Administration
MJH 352 Occam's Razor is Alive and Well in the Veteran's
Administration
(Databases and Communication)
* * * * * * *
! Page 6
* * * * * * *
SEMINAR - INTELLIGENT TUTORING SYSTEMS
Seminar on Intelligent Tutoring Systems to be given next quarter
by Derek Sleeman on Wednesdays 4-6. For more details please see
<SLEEMAN>ITS.PRESS on Sumex, mail to SLEEMAN@SUMEX or call 73257.
* * * * * * *
WCCFL DEADLINE
The deadline for abstracts for the third West Coast Conference on
Formal Linguistics (March 16-18, 1984, UC Santa Cruz) is approaching:
Abstracts have to be received at Cowell College, UCSC, Santa Cruz,
California 95064 by 5 p.m. on Friday, December 16, 1983. The deadline
will be a strict one; abstracts will be distributed for consideration
by the program committee immediately, and abstracts received too late
for distribution will not be forwarded to the committee and will be
returned unopened.
- Geoff Pullum
* * * * * * *
IFIP WORKSHOP IN BRISTOL
IFIP Workshop on Hardware Supported Implementation
of Concurrent Languages in Distributed Systems
University of Bristol, UK, 26-28 March 1984
Due to a postal strike in the Netherlands, the program committee
chairman has not been able to receive any correspondence for several
weeks. If you mailed an abstract or short contribution to Professor
Reijns, at the University of Delft, he would like you to send another
copy to the local organizing committee:
Professor Erik Dagless or Dr. Michael Barton
Department of Electrical and Electronic Engineering
University of Bristol, Bristol BS8 1TR, U.K.
If you missed the deadline for submitting the abstract for a
proposed contribution to the workshop, you may still send one.
* * * * * * *
-------
∂08-Dec-83 0713 KJB@SRI-AI.ARPA A.S.L.
Received: from SRI-AI by SU-AI with TCP/SMTP; 8 Dec 83 07:13:22 PST
Date: Thu 8 Dec 83 07:06:43-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: A.S.L.
To: csli-folks@SRI-AI.ARPA, meseguer@SRI-AI.ARPA, goguen@SRI-AI.ARPA
I plan to propose to the Association for Symbolic Logic that they have
a summer school and meeting here in 1985 on "Logic, language and computation."
By "language" here, I mean, of course, situated languages, natural and
computer. I need to suggest a number of people in the area who might
serve on the program committee and who are members of the ASL. Would you
let me know if you are one? I promise to keep the work of this committee
to an absolute minimum, if the ASL will go along.
Jon
-------
∂08-Dec-83 1227 YAO@SU-SCORE.ARPA Library hours
Received: from SU-SCORE by SU-AI with TCP/SMTP; 8 Dec 83 12:27:33 PST
Date: Thu 8 Dec 83 12:22:52-PST
From: Andrew Yao <YAO@SU-SCORE.ARPA>
Subject: Library hours
To: students@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
As in other parts of the University, the libraries have been asked to make
cuts in budgets. One idea proposed was to have the Math/CS library open
at 1:00 pm on Saturdays instead of 10:00 am. These hours seem to be the period
of lowest use. The director of the Math/CS library is interested in feedback
from us. So please let me know if you have any comments on this subject.
Yao
-------
∂08-Dec-83 1300 LIBRARY@SU-SCORE.ARPA Speed Processed Books with call #'s like 83-001326
Received: from SU-SCORE by SU-AI with TCP/SMTP; 8 Dec 83 12:57:14 PST
Date: Thu 8 Dec 83 12:56:32-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Speed Processed Books with call #'s like 83-001326
To: su-bboards@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA
We are now receiving books which are being "speed processed" through the
system and which do not have Library of Congress Classification call
numbers. These books are searchable online or by title only in the manual
card catalog. If you are referred to a book with a number that begins
with 83 (as of January 1st these books will begin with 84) you can find
them shelved at the beginning of the periodical index section right above
Compu/Math. This procedure is to allow the books to be sent to the branches
where they are accessible instead of having them sit in the catalog
department waiting to have original cataloging. Eventually these books
will be cataloged and added to the LC section. As in the past, if you have
difficulty locating something be sure to ask a staff member to help you.
HL
-------
∂08-Dec-83 1640 WUNDERMAN@SRI-AI.ARPA Friday Phone Calls to Ventura
Received: from SRI-AI by SU-AI with TCP/SMTP; 8 Dec 83 16:39:52 PST
Date: Thu 8 Dec 83 16:36:24-PST
From: WUNDERMAN@SRI-AI.ARPA
Subject: Friday Phone Calls to Ventura
To: CSLI-FRIENDS@SRI-AI.ARPA
cc: Wunderman@SRI-AI.ARPA
Dear Friends,
On Friday mornings from 8:30-9:30, the staff at CSLI hold a weekly meeting in
the Ventura Conference Room. During that time, our office phones are not
covered, but if you have an emergency, please call the lobby phone: 497-0628
and let it ring long enough for one of us to reach the desk. Thanks for your
cooperation!
--Pat Wunderman
-------
∂08-Dec-83 1707 RIGGS@SRI-AI.ARPA SPECIAL MONDAY TALK
Received: from SRI-AI by SU-AI with TCP/SMTP; 8 Dec 83 17:07:44 PST
Date: Thu 8 Dec 83 17:02:07-PST
From: RIGGS@SRI-AI.ARPA
Subject: SPECIAL MONDAY TALK
To: CSLI-Folks@SRI-AI.ARPA
cc: Etchemendy@SRI-KL.ARPA
John Perry would like to schedule a talk by Bob Stalnaker
at noon on Mon., Dec. 12 entitled "Problems with De Re Belief".
This would be a meeting especially having to do with Project B.2
"Semantics of Sentences about Mental States". The talk would
be held in the Ventura Conference Room.
Before scheduling this meeting, we would like to know if
any of you are aware of any conflicts regarding this time slot.
As we are getting a very late start arranging the talk, please
reply regarding conflicts as soon as possible so we can notify
everyone before it occurs.
-------
∂08-Dec-83 2047 @MIT-MC:RICKL%MIT-OZ@MIT-MC Model Theoretic Ontologies
Received: from MIT-MC by SU-AI with TCP/SMTP; 8 Dec 83 20:47:03 PST
Date: Thu 8 Dec 83 23:44:24-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Model Theoretic Ontologies
To: dam%MIT-OZ@MIT-MC.ARPA
cc: phil-sci%MIT-OZ@MIT-MC.ARPA
Date: Tue, 29 Nov 1983 15:43 EST
From: DAM@MIT-OZ
Subject: Model Theoretic Ontologies
Whether or not "logic" in itself has a rich ontology depends
on what one means by "logic". I take "a logic" to consist of two
things: a set of models and a set of propositions, where each
proposition is associated with a truth function on models....
As we've discussed, I think that there is a problem here if the real
world is taken to be the model (as we would want to do in science).
This is that the truth function requires a mapping from the terms in
the set of propositions into entities in the real world, in order to
compute the truth value of a proposition. One must be able to
*recognize* instances (in the world) of the entities asserted by the
theory to exist. I don't think that you mean to include the
truth-function itself as part of your definition of "logic" (correct
me if I'm wrong), at least for those cases where we take the model to
be the world and the propositions to be a scientific theory about the
world. I am also highly dubious that it is reasonable to express the
truth-function directly in the logic it is supposed to be a
truth-function about (you did not claim this, though).
-=*=- rick
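A deliberately trivial illustration of the picture under discussion, with the
interpretation mapping made explicit: a proposition is a truth function on
models, and evaluating it presupposes exactly the mapping from symbols to
entities that the objection concerns. The finite model, the formula encoding,
and the names below are assumptions of this sketch, not anyone's proposal.

# A finite model: a domain plus an interpretation of the non-logical symbols.
model = {
    "domain": {"fido", "rex", "tweety"},
    "interp": {
        "Dog":    {"fido", "rex"},
        "Mammal": {"fido", "rex"},
        "Bird":   {"tweety"},
    },
}

def holds(formula, model, x=None):
    # Evaluate a one-variable formula in a model.  Formulas are nested tuples,
    # e.g. ("forall", ("implies", ("Dog", "x"), ("Mammal", "x"))).
    op = formula[0]
    if op == "forall":
        return all(holds(formula[1], model, d) for d in model["domain"])
    if op == "exists":
        return any(holds(formula[1], model, d) for d in model["domain"])
    if op == "implies":
        return (not holds(formula[1], model, x)) or holds(formula[2], model, x)
    if op == "not":
        return not holds(formula[1], model, x)
    # Atomic case: a predicate symbol applied to the variable "x".
    return x in model["interp"][op]

dogs_are_mammals = ("forall", ("implies", ("Dog", "x"), ("Mammal", "x")))
print(holds(dogs_are_mammals, model))     # True
birds_are_dogs = ("forall", ("implies", ("Bird", "x"), ("Dog", "x")))
print(holds(birds_are_dogs, model))       # False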
-------
∂08-Dec-83 2056 @MIT-MC:RICKL%MIT-OZ@MIT-MC Model Theoretic Ontologies
Received: from MIT-MC by SU-AI with TCP/SMTP; 8 Dec 83 20:56:29 PST
Date: Thu 8 Dec 83 23:50:14-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Model Theoretic Ontologies
To: batali%MIT-OZ@MIT-MC.ARPA, crummer@AEROSPACE.ARPA
cc: phil-sci%MIT-OZ@MIT-MC.ARPA
John had a good point earlier:
From: John Batali <Batali at MIT-OZ>
It sounds like the claim is that Tarskian semantics ALLOWS
for arbitrarily rich ontologies. But to really get representation
right, we have to HAVE an adequately rich ontology.
and to have a rich ontology we must first discover it, and then be able
to recognize items when we run across them again.
This is hard enough with medium-size, observable physical objects like
cows & horses (remember the discussion on natural kinds last spring?
which only seemed to conclude that some people liked the notion, and
others objected violently), but gets worse for unobservable theoretical
terms like "electron" (which must also be in your ontology of science).
Nor does it work to try to simply enumerate the attributes:
("mammals" is not a model, of course, but Charlie's intention is clear)
From: Charlie Crummer <crummer@AEROSPACE>
In re: "Dogs are mammals."
If this statement assumes the existence of the model "mammals" then it
calls for a comparison of the attributes of the set "dogs" with the attributes
comprising the model "mammals". If the attributes match (the mammalness
attributes), then the statement can be said to be true.
Consider, for example, the duck-billed platypus, which failed to match the
mammal attributes accepted by scientists of the time (e.g., live birth and
others). Rather than saying that the statement "Duck-billed platypii
are mammals" is false, the mammal attributes were revised. Categories
are only adhered to as long as they are useful, a fortiori attribute lists.
If the statement is a declaration intended to define (create) the model
"mammals" then the "intersection" (forgive me, set theorists) of all the
attributes of the examples used to define the model, e.g. "Whales are mammals;
Bats are mammals; etc., serves as the definition of the model "mammals".
The classic counter-example being "games", the "intersection" of all of the
attributes of all particular examples of games being nil.
-=*=- rick
-------
∂09-Dec-83 1048 TAJNAI@SU-SCORE.ARPA Computer Forum dates
Received: from SU-SCORE by SU-AI with TCP/SMTP; 9 Dec 83 10:40:00 PST
Date: Fri 9 Dec 83 08:24:03-PST
From: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: Computer Forum dates
To: Faculty@SU-SCORE.ARPA, secretaries@SU-SCORE.ARPA
The Sixteenth Computer Forum Annual Meeting will be held
Wednesday/Thursday, February 8/9, 1984.
An informal buffet supper will be held at the Faculty Club the
evening of Tuesday, Feb. 7 -- from 6 to 8.
A preliminary program will be out by December 15.
Carolyn Tajnai
-------
∂09-Dec-83 1156 EMMA@SRI-AI.ARPA CSLI Directory
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Dec 83 11:56:24 PST
Date: Fri 9 Dec 83 11:55:00-PST
From: *<emma>directory.txt
Subject: CSLI Directory
Sender: EMMA@SRI-AI.ARPA
To: csli-folks@SRI-AI.ARPA
Reply-To: *<emma>directory.txt
Dear CSLI-FOLKS:
We are assembling a CSLI-FOLKS DIRECTORY, which will include work
title, address and phone; ARPANet address; home address and phone
(optional).
We realize that we already have some information but wish to ensure
the accuracy of the directory by double checking, so please complete
all the non-optional entries on the form.
We hope this directory will be useful to you in communicating with
other CSLI folks, including those not on the NET. We appreciate
your response so that our directory can be as complete as possible.
As soon as the input is finished, copies will be available in the
lobby at Ventura. For questions, contact (Emma@sri-ai) or Emma Pease
at (415) 497-0939. Thanks for your cooperation.
1) NAME: 2) NICKNAME(optional):
3) NET ADD: 4) ALT NET ADD:
5) TITLE: 6) ALT TITLE:
7) WORK ADD: 8) ALT WORK ADD:
9) WORK PH: 10) ALT WORK PH:
12) HOME ADD(optional):
13) HOME PH(optional):
-------
∂09-Dec-83 1159 KJB@SRI-AI.ARPA New members of CSLI
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Dec 83 11:52:11 PST
Date: Fri 9 Dec 83 11:48:52-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: New members of CSLI
To: csli-folks@SRI-AI.ARPA
Dear all,
As you all know, one of our main concerns this first year is
to develop a plan to build up area C. Following discussions with Rod
Burstall and members of the Executive Committee, we have invited Joe
Goguen and Jose (Pepe) Meseguer to join CSLI, to help with the
research and planning in Area C. I am delighted to say that they have
agreed. They are first-rate researchers in area C and are
enthusiastic about what we are trying to do at CSLI. I hope that they
will be able to come to the party next week, so that those of you who
do not know them can meet them, learn of their interests, and
share yours with them. They have been added to the csli-folks list,
and have net addresses lastname@sri-ai.
Jon
-------
∂09-Dec-83 1205 KJB@SRI-AI.ARPA Party
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Dec 83 12:05:18 PST
Date: Fri 9 Dec 83 11:58:37-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Party
To: csli-folks@SRI-AI.ARPA
Dear all,
I hope you will all be able to come to the party next week, and
that you will all feel free to introduce yourselves to each other,
staff and researchers. It is important that the staff of CSLI,
whether housed at SRI, Ventura, or PARC, all get to know each other
and the other members of CSLI.
Ivan has agreed to arrange the entertainment for the party,
and Emma will be asking others to help with things. If the food is
as great as it was on Sept 1, and with Ivan's entertainment, it should
be fun.
I have taken the liberty of inviting a number of people
outside the Center, like the local members of the Advisory Panel, the
dept chairpeople of the relevant SU depts, John Brown, Charlie Smith,
and the like. Only nice comfortable types, though.
I hope my future daughter does not choose that night to
arrive.
Jon
-------
∂09-Dec-83 1316 ULLMAN@SU-SCORE.ARPA CIS building
Received: from SU-SCORE by SU-AI with TCP/SMTP; 9 Dec 83 13:16:09 PST
Date: Fri 9 Dec 83 13:12:57-PST
From: Jeffrey D. Ullman <ULLMAN@SU-SCORE.ARPA>
Subject: CIS building
To: faculty@SU-SCORE.ARPA
Plans for filling the CIS building are being made.
Does anyone wish to move VLSI-related activities to that building?
Please let me know what, in sq. ft., you have in mind.
-------
∂09-Dec-83 1538 RIGGS@SRI-AI.ARPA Talk by Stalnacker
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Dec 83 15:38:03 PST
Date: Fri 9 Dec 83 15:33:53-PST
From: RIGGS@SRI-AI.ARPA
Subject: Talk by Stalnacker
To: CSLI-Friends@SRI-AI.ARPA
Monday, December 12 at 12:00 noon Bob Stalnaker will give a talk
entitled "Problems with 'De Re' Belief. This will be a meeting
of a B-2 project. Everyone interested is invited to attend this talk
being held in the Ventura Conference Room.
Sandy for John Perry
-------
∂10-Dec-83 0822 KJB@SRI-AI.ARPA Sigh
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Dec 83 08:22:38 PST
Date: Sat 10 Dec 83 08:16:32-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Sigh
To: csli-folks@SRI-AI.ARPA
I need to prepare an end of the year letter to SDF telling them how things
are going. Would each committee chairperson send me a short report on the
progress of your committee so far for me to include in my report, and would
those of you who are on committees but not the chairperson nag your chair
to make sure they do this? I will make up the report during the last week
of the year, but some of you are leaving town this week for the rest of the
year, so please send me this report before you leave. Make the report honest.
I will exercise editorial judgment in deciding how much to include. Thanks.
Jon
-------
∂10-Dec-83 1902 LAWS@SRI-AI.ARPA AIList Digest V1 #111
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Dec 83 19:01:54 PST
Date: Sat 10 Dec 1983 14:46-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #111
To: AIList@SRI-AI
AIList Digest Saturday, 10 Dec 1983 Volume 1 : Issue 111
Today's Topics:
Call for Papers - Special Issue of AJCL,
Linguistics - Phrasal Analysis Paper,
Intelligence - Purpose of Definition,
Expert Systems - Complexity,
Environments - Need for Sharable Software,
Jargon - Mental States,
Administrivia - Spinoff Suggestion,
Knowledge Representation - Request for Discussion
----------------------------------------------------------------------
Date: Thu 8 Dec 83 08:55:34-PST
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: Special Issue of AJCL
American Journal of Computational Linguistics
The American Journal of Computational Linguistics is planning a
special issue devoted to the Mathematical Properties of Linguistic
Theories. Papers are hereby requested on the generative capacity of
various syntactic formalisms as well as the computational complexity
of their related recognition and parsing algorithms. Articles on the
significance (and the conditions for the significance) of such results
are also welcome. All papers will be subjected to the normal
refereeing process and must be accepted by the Editor-in-Chief, James
Allen. In order to allow for publication in Fall 1984, five copies of
each paper should be sent by March 31, 1984 to the special issue
editor,
C. Raymond Perrault Arpanet: Rperrault@sri-ai
SRI International Telephone: (415) 859-6470
EK268
Menlo Park, CA 94025.
Indication of intention to submit would also be appreciated.
------------------------------
Date: 8 Dec 1983 1347-PST
From: MEYERS.UCI-20A@Rand-Relay
Subject: phrasal analysis paper
Over a month ago, I announced that I'd be submitting
a paper on phrasal analysis to COLING. I apologize
to all those who asked for a copy for not getting it
to them yet. COLING acceptance date is April 2,
so this may be the earliest date at which I'll be releasing
papers. Please do not lose heart!
Some preview of the material might interest AILIST readers:
The paper is entitled "Conceptual Grammar", and discusses
a grammar that uses syntactic and 'semantic' nonterminals.
Very specific and very general information about language
can be represented in the grammar rules. The grammar is
organized into explicit levels of abstraction.
The emphasis of the work is pragmatic, but I believe it
represents a new and useful approach to Linguistics as
well.
Conceptual Grammar can be viewed as a systematization of the
knowledge base of systems such as PHRAN (Wilensky and Arens,
at UC Berkeley). Another motivation for a conceptual grammar is
the lack of progress in language understanding using syntax-based
approaches. A third motivation is the lack of intuitive appeal
of existing grammars -- existing grammars offer no help in manipulating
concepts the way humans might. Conceptual Grammar is
an 'open' grammar at all levels of abstraction. It is meant
to handle special cases, exceptions to general rules, idioms, etc.
Papers on the implemented system, called VOX, will follow
in the near future. VOX analyzes messages in the Navy domain.
(However, the approach to English is completely general).
If anyone is interested, I can elaborate, though it is
hard to discuss such work in this forum. Requests
for papers (and for abstracts of UCI AI Project papers)
can be sent by computer mail, or 'snail-mail' to:
Amnon Meyers
AI Project
Department of Computer Science
University of California
Irvine, CA 92717
PS: A paper has already been sent to CSCSI. The papers emphasize
different aspects of Conceptual Grammar. A paper on VOX as
an implementation of Conceptual Grammar is planned for AAAI.
------------------------------
Date: 2 Dec 83 7:57:46-PST (Fri)
From: ihnp4!houxm!hou2g!stekas @ Ucb-Vax
Subject: Re: Rational Psych (and science)
Article-I.D.: hou2g.121
It is true that psychology is not a "science" in the way a physicist
defines "science". Of course, a physicist would be likely to bend
his definition of "science" to exclude psychology.
The situation is very much the same as defining "intelligence".
Social "scientists" keep tightening their definition of intelligence
as required to exclude anything which isn't a human being. While
AI people now argue over what intelligence is, when an artificial system
is built with the mental ability of a mouse (the biological variety!),
in no time all definitions of intelligence will be bent to include it.
The real significance of a definition is that it clarifies the *direction*
in which things are headed. Defining "intelligence" in terms of
adaptability and self-consciousness is evidence of a healthy direction
for AI.
Jim
------------------------------
Date: Fri 9 Dec 83 16:08:53-PST
From: Peter Karp <KARP@SUMEX-AIM.ARPA>
Subject: Biologists, physicists, and report generating programs
I'd like to ask Mr. Fostel how biologists "do research in ways that seem
different than physicists". It would be pretty exciting to find that
one or both of these two groups do science in a way that is not part of
standard scientific method.
He also makes the following claim:
... the complexity of the commercial software I mentioned is
MUCH greater than the usual problem attacked by AI people...
With the example that:
... designing the reports and generating them for a large complex
system (and doing a good job) may take a large fraction of the total
time, yet such reporting is not usually done in the AI world.
This claim is rather absurd. While I will not claim that deciding on
the best way to present a large amount of data is a trivial task, the
point is that report generating programs have no knowledge about data
presentation strategies. People who do have such knowledge spend hours
and hours deciding on a good scheme and then HARD CODING such a scheme
into a program. Surely one would not claim that a program consisting
solely of a set of WRITELN (or insert your favorite output keyword)
statements has any complexity at all, much less intelligence or
knowledge? Just because a program takes a long time to write doesn't
mean it has any complexity, in terms of control structures or data
structures. And in fact this example is a perfect proof of this
conjecture.
------------------------------
Date: 2 Dec 83 15:27:43-PST (Fri)
From: sri-unix!hplabs!hpda!fortune!amd70!decwrl!decvax!duke!mcnc!shebs
@utah-cs.UUCP (Stanley Shebs)
Subject: Re: RE: Expert Systems
Article-I.D.: utah-cs.2279
A large data-processing application is not an expert system because
it cannot explain its action, nor is the knowledge represented in an
adequate fashion. A "true" expert system would *not* consist of
algorithms as such. It would consist of facts and heuristics organized
in a fashion to permit some (relatively uninteresting) algorithmic
interpreter to generate interesting and useful behavior. Production
systems are a good example. The interpreter is fixed - it just selects
rules and fires them. The expert system itself is a collection of rules,
each of which represents a small piece of knowledge about the domain.
This is of course an idealization - many "expert systems" have a large
procedural component. Sometimes the existence of that component can
even be justified...
stan shebs
utah-cs!shebs
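A minimal sketch, in Python, of the fixed interpreter described above: the
knowledge lives entirely in the rules, and the interpreter just keeps
selecting rules whose conditions hold in working memory and firing them. The
rule format and the toy medical rules are assumptions of this sketch, not any
particular expert-system shell.

def run(rules, facts):
    # Forward-chain: fire any rule whose conditions are all in working memory,
    # until no rule adds anything new.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)        # fire the rule
                changed = True
    return facts

rules = [
    ({"fever", "stiff-neck"}, "suspect-meningitis"),
    ({"suspect-meningitis"}, "order-culture"),
]
print(run(rules, {"fever", "stiff-neck"}))
# working memory now also contains 'suspect-meningitis' and 'order-culture'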
------------------------------
Date: Wed, 7 Dec 1983 05:39 EST
From: LEVITT%MIT-OZ@MIT-MC.ARPA
Subject: What makes AI crawl
From: Seth Goldman <seth@UCLA-CS>
Subject: Programming environments are fine, but...
What are all of you doing with your nifty, adequate, and/or brain-damaged
computing environments? Also, if we're going to discuss environments, it
would be more productive I think to give concrete examples...
[Sounds good to me. It would be interesting to know
whether progress in AI is currently held back by conceptual
problems or just by the programming effort of building
large and user-friendly systems. -- KIL]
It's clear to me that, despite a relative paucity of new "conceptual"
AI ideas, AI is being held back entirely by the latter "programming
effort" problem, AND by the failure of senior AI researchers to
recognize this and address it directly. The problem is regressive:
since programming problems are SO hard, the senior faculty typically
give up programming altogether and lose touch with the problems.
Nobody seems to realize how close we would be to practical AI, if just
a handful of the important systems of the past were maintained and
extended, and if the most powerful techniques were routinely applied
to new applications - if an engineered system with an ongoing,
expanding knowledge base were developed. Students looking for theses
and "turf" are reluctant to engineer anything familiar-looking. But
there's every indication that the proven techniques of the 60's/early
70's could become the core of a very smart system with lots of
overlapping knowledge in very different subjects, opening up much more
interesting research areas - IF the whole thing didn't have to be
(re)programmed from scratch. AI is easy now, showing clear signs of
diminishing returns; CS/software engineering is hard.
I have been developing systems for the kinds of analogy problems music
improvisors and listeners solve when they use "common sense"
descriptions of what they do/hear, and of learning by ear. I have
needed basic automatic constraint satisfaction systems
(Sutherland'63), extensions for dependency-directed backtracking
(Sussman'77), and example comparison/extension algorithms
(Winston'71), to name a few. I had to implement everything myself.
When I arrived at MIT AI there were at least 3 OTHER AI STUDENTS
working on similar constraint propagator/backtrackers, each sweating
out his version for a thesis critical path, resulting in a draft
system too poorly engineered and documented for any of the other
students to use. It was idiotic. In a sense we wasted most of our
programming time, and would have been better off ruminating about
unfamiliar theories like some of the faculty. Theories are easy (for
me, anyway). Software engineering is hard. If each of the 3 ancient
discoveries above were an available module, AI researchers could have
theories AND working programs, a fine show.
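For what it is worth, the core of the constraint-propagation module wished for
above fits in a dozen lines of Python; the representation (explicit value
domains, binary constraints as predicates) and the toy example are assumptions
of this sketch, not Sutherland's or Sussman's systems, and dependency-directed
backtracking would need record-keeping well beyond this.

def propagate(domains, constraints):
    # domains: var -> set of values; constraints: (x, y) -> test(vx, vy).
    # Prune each variable's domain until every binary constraint is
    # arc-consistent, i.e. every remaining value has some support.
    domains = {v: set(vals) for v, vals in domains.items()}
    changed = True
    while changed:
        changed = False
        for (x, y), ok in constraints.items():
            supported = {vx for vx in domains[x]
                         if any(ok(vx, vy) for vy in domains[y])}
            if supported != domains[x]:
                domains[x] = supported
                changed = True
    return domains

doms = {"a": {1, 2, 3}, "b": {1, 2, 3}}
cons = {("a", "b"): lambda va, vb: va < vb,
        ("b", "a"): lambda vb, va: va < vb}
print(propagate(doms, cons))    # {'a': {1, 2}, 'b': {2, 3}}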
------------------------------
Date: Thu, 8 Dec 83 11:56 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states of machines
I have no problem with using anthropomorphic (or "mental") descriptions of
systems as a heuristic for dealing with difficult problems. One such
trick I especially approve of is Seymour Papert's "body syntonicity"
technique. The basic idea is to get young children to understand the
interaction of mathematical concepts by getting them to enter into a
turtle world and become an active participant in it, and to use this
perspective for understanding the construction of geometric structures.
What I am objecting to is that I sense that John McCarthy is implying
something more in his article: that human mental states are no different
than the very complex systems that we sometimes use mental descriptions
as a shorthand to describe.
I would refer to Ilya Prigogine's 1977 Nobel Prize-winning work in chemistry on
"Dissipative Structures" to illustrate the foolishness of McCarthy's
claim.
Dissipative structures can be explained to some extent to non-chemists by means
of the termite analogy. Termites construct large rich and complex domiciles.
These structures sometimes are six feet tall and are filled with complex
arches and domed structures (it took human architects many thousands of
years to come up with these concepts). Yet if one watches termites at
the lowest "mechanistic" level (one termite at a time), all one sees
is a termite randomly placing drops of sticky wood pulp in random spots.
What Prigogine noted was that there are parallels in chemistry, where random
underlying processes spontaneously give rise to complex and rich ordered
structures at higher levels.
If I accept McCarthy's argument that complex systems based on finite state
automata exhibit mental characteristics, then I must also hold that termite
colonies have mental characteristics, Douglas Hofstadter's Aunt Hillary also
has mental characteristics, and that certain colloidal suspensions and
amorphous crystals have mental characteristics.
- Steven Gutfreund
Gutfreund.umass@csnet-relay
[I, for one, have no difficulty with assigning mental "characteristics"
to inanimate systems. If a computer can be "intelligent", and thus
presumably have mental characteristics, why not other artificial
systems? I admit that this is Humpty-Dumpty semantics, but the
important point to me is the overall I/O behavior of the system.
If that behavior depends on a set of (discrete or continuous) internal
states, I am just as happy calling them "mental" states as calling
them anything else. To reserve the term mental for beings having
volition, or souls, or intelligence, or neurons, or any other
intuitive characteristic seems just as arbitrary to me. I presume
that "mental" is intended to contrast with "physical", but I side with
those seeing a physical basis to all mental phenomena. Philosophers
worry over the distinction, but all that matters to me is the
behavior of the system when I interface with it. -- KIL]
------------------------------
Date: 5 Dec 83 12:08:31-PST (Mon)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxnn!pyuxmm!cbdkc1!cbosgd!osu-dbs!lum @ Ucb-Vax
Subject: Re: defining AI, AI research methodology, jargon in AI
Article-I.D.: osu-dbs.426
Perhaps Dyer is right. Perhaps it would be a good thing to split net.ai/AIList
into two groups, net.ai and net.ai.d, a la net.jokes and net.jokes.d. In one
the AI researchers could discuss actual AI problems, and in the other,
philosophers could discuss the social ramifications of AI, etc. Take your pick.
Lum Johnson (cbosgd!osu-dbs!lum)
------------------------------
Date: 7 Dec 83 8:27:08-PST (Wed)
From: decvax!tektronix!tekcad!franka @ Ucb-Vax
Subject: New Topic (technical) - (nf)
Article-I.D.: tekcad.155
OK, some of you have expressed a dislike for "non-technical, philosophical,
etc." discussions on this newsgroup. So for those of you who are
tired of this, I pose a technical question for you to talk about:
What is your favorite method of representing knowledge in a KBS?
Do you depend on frames, atoms of data jumbled together randomly, or something
in between? Do you have any packages (for public consumption which run on
machines that most of us have access to) that aid people in setting up knowledge
bases?
I think that this should keep this newsgroup talking at least partially
technically for a while. No need to thank me. I just view it as a public
service.
From the truly menacing,
/- -\ but usually underestimated,
<-> Frank Adrian
(tektronix!tekcad!franka)
------------------------------
End of AIList Digest
********************
∂12-Dec-83 0912 RIGGS@SRI-AI.ARPA GARDENFORS TALK CANCELLED
Received: from SRI-AI by SU-AI with TCP/SMTP; 12 Dec 83 09:12:36 PST
Date: Mon 12 Dec 83 09:09:38-PST
From: RIGGS@SRI-AI.ARPA
Subject: GARDENFORS TALK CANCELLED
To: CSLI-FRIENDS@SRI-AI.ARPA
Due to a family emergency, Peter Gardenfors returned to Sweden
last weekend. His talk, which was due to be held at 2:00 p.m. in the Ventura
Conference Room, is cancelled.
He sends his warmest regards to everyone with CSLI.
-------
∂12-Dec-83 1126 TAJNAI@SU-SCORE.ARPA Bell Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Dec 83 11:26:18 PST
Date: Mon 12 Dec 83 11:12:41-PST
From: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: Bell Nominations
To: faculty@SU-SCORE.ARPA
cc: jf@SU-SCORE.ARPA
Some of the students nominated were ineligible because of citizenship.
Some already had NSF fellowships. The following are the students
nominated. Please send comments and/or votes as soon as possible.
FIRST YEAR:
Keith Hall -- nominated by Tom Binford
John Lamping -- nominated by Forest Baskett and seconded by John Hennessy
Kim McCall -- nominated by Forest Baskett and seconded by John Hennessy
SECOND YEAR +:
Linda DeMichiel -- nominated by Gio Wiederhold (Linda has been a Xerox
Honors Coop student, but is now full time)
THIRD YEAR STUDENTS:
David Chelberg -- nominated by Tom Binford. passed comp and qual
David Foulser -- nominated by Joe Oliger. passed comp and qual.
Tim Mann -- nominated by David Cheriton. passed comp, conditional on qual.
FOURTH YEAR STUDENT:
Frank Yellin -- nominated by Zohar Manna. filed G81
The Bell Fellowship is a 4-year fellowship. Last year they gave the award
to Marianne Winslett. Each year they will add an additional award. I think
it advisable for the Department to nominate the first and second year
students. More advanced students should be nominated for the IBM.
Please respond.
Carolyn
-------
∂12-Dec-83 1135 TAJNAI@SU-SCORE.ARPA Re: Bell Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Dec 83 11:35:27 PST
Return-Path: <reid@Glacier>
Received: from Glacier by SU-SCORE.ARPA with TCP; Mon 12 Dec 83 11:33:09-PST
Date: Monday, 12 December 1983 11:31:40-PST
To: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Cc: jf@SU-SCORE.ARPA, reid@Glacier
Subject: Re: Bell Nominations
In-Reply-To: Your message of Mon 12 Dec 83 11:12:41-PST.
From: Brian Reid <reid@Glacier>
ReSent-date: Mon 12 Dec 83 11:34:50-PST
ReSent-from: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
ReSent-to: faculty@SU-SCORE.ARPA
I emphatically second the nomination of Keith Hall. He is fantastic!
Note that although Keith Hall is a first-year student he has passed the
Comprehensive. I believe, by the way, that he got the highest score on
the recent comp.
From information in his admissions folder John Lamping seems to walk on
water, and I offer a second of John Lamping based on his performance
before he came to Stanford. I know nothing about what he has done since
coming here.
Brian
∂12-Dec-83 1208 RIGGS@SRI-AI.ARPA GARDENFORS TALK IS UNCANCELLED
Received: from SRI-AI by SU-AI with TCP/SMTP; 12 Dec 83 12:08:23 PST
Date: Mon 12 Dec 83 12:03:16-PST
From: RIGGS@SRI-AI.ARPA
Subject: GARDENFORS TALK IS UNCANCELLED
To: CSLI-friends@SRI-AI.ARPA
Peter Gardenfors is still here and will be speaking as
planned today in the Ventura Conference Room. Due to a
change of plans he will be speaking at 2:00 p.m. today.
I am very sorry for the confusion and miscommunication.
Sandy
-------
∂12-Dec-83 1216 TAJNAI@SU-SCORE.ARPA Re: Bell Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Dec 83 12:16:23 PST
Return-Path: <jlh@Shasta>
Received: from Shasta by SU-SCORE.ARPA with TCP; Mon 12 Dec 83 12:14:07-PST
Date: Monday, 12 Dec 1983 12:13-PST
To: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: Re: Bell Nominations
In-Reply-To: Your message of Mon 12 Dec 83 11:12:41-PST.
From: John Hennessy <jlh@Shasta>
ReSent-date: Mon 12 Dec 83 12:15:12-PST
ReSent-from: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
ReSent-to: faculty@SU-SCORE.ARPA
I give my strongest support to Lamping; he is the best of some 100+
students in 282. I give McCall a strong second as well. We should
probably give them a couple of choices.
∂12-Dec-83 1422 TAJNAI@SU-SCORE.ARPA
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Dec 83 14:22:47 PST
Return-Path: <TOB@SU-AI>
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Mon 12 Dec 83 14:16:21-PST
Date: 12 Dec 83 1415 PST
From: Tom Binford <TOB@SU-AI>
To: tajnai@SU-SCORE
ReSent-date: Mon 12 Dec 83 14:17:00-PST
ReSent-from: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
ReSent-to: faculty@SU-SCORE.ARPA
Carolyn
Brian Reid had a strong second for Keith Hall
Tom
∂12-Dec-83 1405 TAJNAI@SU-SCORE.ARPA Bell Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Dec 83 14:05:21 PST
Return-Path: <TW@SU-AI>
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Mon 12 Dec 83 13:49:11-PST
Date: 12 Dec 83 1348 PST
From: Terry Winograd <TW@SU-AI>
Subject: Bell Nominations
To: tajnai@SU-SCORE
ReSent-date: Mon 12 Dec 83 14:03:43-PST
ReSent-from: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
ReSent-to: faculty@SU-SCORE.ARPA
∂12-Dec-83 1126 TAJNAI@SU-SCORE.ARPA Bell Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Dec 83 11:26:18 PST
Date: Mon 12 Dec 83 11:12:41-PST
From: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: Bell Nominations
To: faculty@SU-SCORE.ARPA
cc: jf@SU-SCORE.ARPA
Some of the students nominated were ineligible because of citizenship.
Some already had NSF fellowships. The following are the students
nominated. Please send comments and/or votes as soon as possible.
FIRST YEAR:
Keith Hall -- nominated by Tom Binford
John Lamping -- nominated by Forest Baskett and seconded by John Hennessy
Kim McCall -- nominated by Forest Baskett and seconded by John Hennessy
SECOND YEAR +:
Linda DeMichiel -- nominated by Gio Wiederhold (Linda has been a Xerox
Honors Coop student, but is now full time)
THIRD YEAR STUDENTS:
David Chelberg -- nominated by Tom Binford. passed comp and qual
David Foulser -- nominated by Joe Oliger. passed comp and qual.
Tim Mann -- nominated by David Cheriton. passed comp, conditional on qual.
FOURTH YEAR STUDENT:
Frank Yellin -- nominated by Zohar Manna. filed G81
The Bell Fellowship is a 4-year fellowship. Last year they gave the award
to Marianne Winslett. Each year they will add an additional award. I think
it advisable for the Department to nominate the first and second year
students. More advanced students should be nominated for the IBM.
Please respond.
Carolyn
-------
I am in favor of Lamping and McCall -t
∂12-Dec-83 1706 TAJNAI@SU-SCORE.ARPA Update on Bell Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Dec 83 17:06:18 PST
Date: Mon 12 Dec 83 17:05:21-PST
From: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Subject: Update on Bell Nominations
To: Faculty@SU-SCORE.ARPA
cc: JF@SU-SCORE.ARPA
My apologies; Jeff Naughton was not included in the list of nominations
made by Forest Baskett. I have updated the listing adding comments
made by faculty.
Some of the students nominated were ineligible because of citizenship.
Some already had NSF fellowships. The following are the students
nominated. Please send comments and/or votes as soon as possible.
FIRST YEAR:
KEITH HALL -- nominated by Tom Binford, seconded by Brian Reid
BKR: I emphatically second the nomination of Keith Hall. He is
fantastic. Note that although Keith Hall is a first-year
student he has passed the Comprehensive. I believe, by the way,
that he got the highest score on the recent comp.
JOHN LAMPING -- nominated by Forest Baskett and seconded by John Hennessy
FB: Student in CS311; 1 of 4 who stands out prominently above
the rest of a very large class of mostly first year students;
seems to have significant theoretical capabilities coupled with major
architectural and systems interests that would seem to make him a
good candidate for a Bell Fellowship.
JLH: I am particularly impressed by Lamping in classes. I give my strongest
support to Lamping; he is the best of some 100+ students in 282.
BKR: From information in his admissions folder John Lamping seems to walk on
water, and I offer a second of John Lamping based on his performance
before he came to Stanford. I know nothing about what he has done since
coming here.
KIM MCCALL -- nominated by Forest Baskett and seconded by John Hennessy
FB: Student in CS311; 1 of 4 who stands out prominently above
the rest of a very large class of mostly first year students;
seems to have significant theoretical capabilities coupled with major
architectural and systems interests that would seem to make him a
good candidate for a Bell Fellowship.
JLH: I am particularly impressed by McCall in classes and by his
potential research talent. I give McCall a strong second as well.
SECOND YEAR:
JEFF NAUGHTON -- nominated by Forest Baskett and seconded by John Hennessy
passed written Comp with High Pass
FB: Student in CS311 and he is 1 of 4 who stands out prominently above
the rest; seems to have significant theoretical capabilities coupled
with major architectural and systems interests that would seem to
make him a good candidate for a Bell Fellowship.
SECOND YEAR+:
LINDA DEMICHIEL -- nominated by Gio Wiederhold (Linda has been a Xerox
Honors Coop student, but is now full time)
THIRD YEAR STUDENTS:
David Chelberg -- nominated by Tom Binford. passed comp and qual
David Foulser -- nominated by Joe Oliger. passed comp and qual.
Tim Mann -- nominated by David Cheriton. passed comp, conditional on qual.
FOURTH YEAR STUDENT:
Frank Yellin -- nominated by Zohar Manna. filed G81 (Frank has an IBM
fellowship and it will probably be renewed for an additional year).
The Bell Fellowship is a 4-year fellowship. Last year they gave the award
to Marianne Winslett. Each year they will add an additional award. I think
it advisable for the Department to nominate the first and second year
students. More advanced students should be nominated for the IBM.
Please respond.
Carolyn
-------
∂13-Dec-83 1055 EMMA@SRI-AI.ARPA Holiday Potluck Party (reminder)
Received: from SRI-AI by SU-AI with TCP/SMTP; 13 Dec 83 10:55:15 PST
Date: Tue 13 Dec 83 10:52:44-PST
From: Emma Pease <EMMA@SRI-AI.ARPA>
Subject: Holiday Potluck Party (reminder)
To: csli-folks@SRI-AI.ARPA
HOLIDAY POTLUCK PARTY
Please remember to rsvp to *<emma>party.txt@sri-ai or Emma@sri-ai
if you haven't already. Also remember the party starts between 6 and
6:30 at 1610 Oak Creek Apartments (in the Oak Room). The Oak Creek
apartments are across Willow road from the Stanford Med. Center.
Please bring food to share. If you have any questions contact me.
See you on Thursday.
Yours
Emma
-------
∂13-Dec-83 1349 @SRI-AI.ARPA:desRivieres.PA@PARC-MAXC.ARPA CSLI Activities for Thursday Dec. 15th
Received: from SRI-AI by SU-AI with TCP/SMTP; 13 Dec 83 13:48:43 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Tue 13 Dec 83 13:40:43-PST
Date: Tue, 13 Dec 83 13:38 PST
From: desRivieres.PA@PARC-MAXC.ARPA
Subject: CSLI Activities for Thursday Dec. 15th
To: csli-friends@SRI-AI.ARPA
Reply-to: desRivieres.PA@PARC-MAXC.ARPA
CSLI SCHEDULE FOR THURSDAY, DECEMBER 15th, 1983
10:00 Research Seminar on Natural Language
Speaker: Martin Kay (Xerox-CSLI)
Title: "Unification"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Ray Perrault (SRI-CSLI)
Paper for discussion: "On time, tense, and aspect: an essay
in english metaphysics"
by Emmon Bach
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speakers: Fernando Pereira and Stuart Shieber (SRI-CSLI)
Title: "Feature Systems and their use in Grammars"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Richard Waldinger (SRI)
Title: "Deductive Program Synthesis Research"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. $0.75 all-day parking is available
in a lot located just off Campus Drive, across from the construction site.
∂13-Dec-83 2134 KJB@SRI-AI.ARPA Reorganization
Received: from SRI-AI by SU-AI with TCP/SMTP; 13 Dec 83 21:33:31 PST
Date: Tue 13 Dec 83 21:28:13-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Reorganization
To: CSLI-principals@SRI-AI.ARPA
Dear all,
For some time, and especially since the advisory panel's
visit, I have felt the need to fine-tune the administrative
structure, or should I say streamline it. Here are some aims:
To give everyone, principals and associates, clear guidelines
on resources of various sorts that they should be able to count on;
To streamline the authorization process so that, after an initial
set of decisions, people will only have to get authorization from Betsy,
eliminating the need for project and area managers to be involved in
the majority of decisions;
To have one natural language area, not two; and finally
To come up with projects that fit more accurately what we are doing.
The plan is to have three areas:
I. Natural languages
II. Computer languages
III. Foundations
Each of these will have a "leader", who basically chairs the process of
constructing the projects and making budget recommendations for the
area, but then, once things are set up, is only involved when tough
problems arise. Each area will have some subareas, with leaders who
are charged with making things happen.
Example: area III. Chair: Barwise
III.1 Common sense theories of the world, Moore (old D4)
III.2 Mind, action and reasoning, Rosenschein (old D2,D3)
III.3 Logic of information and computation, Barwise (old D1)
This area will have a certain budget for salaries, travel, workshops,
etc. The three leaders will act as a budget committee to recommend
expenditures to the executive committee. This will result in
commitments to people in area III in terms of salary, travel, etc,
and money to use for workshops. Once this is decided, it will only
require Betsy's fairly routine go-ahead to do things, not the cumbersome
machinery we now have in place. [There is no way to avoid the awkwardnesses
caused by the extra paperwork that the multi-institutional arrangement
causes, I fear.]
I have asked John Perry and Betsy to work together, consulting with all
the people working in area I, to come up with a recommendation for an
analogous organization of area I. I have made one suggestion, which would
split it into three subareas, but there may well be better solutions. I
urge each of you working full or part time on natural language to
talk to John and Betsy together in the very near future, to exchange
ideas on the best way to organize the natural language area. For that
matter, other principals should also feel free to make suggestions. The
more good ideas the better.
John and Betsy will make their recommendations public before they are
adopted, so that everyone will have a chance to react to them then, but
it will obviously be much more helpful to give them your ideas early.
I hope these changes will help iron out some of the minor irritations
that some of you have felt. Feel free to talk to me if you do not feel
your concerns are being addressed sufficiently.
Yours for an even friendlier center,
Jon
-------
∂14-Dec-83 1220 YAO@SU-SCORE.ARPA [C.S./Math Library <LIBRARY@SU-SCORE.ARPA>: Math/CS Library Hours]
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Dec 83 12:20:30 PST
Date: Wed 14 Dec 83 12:13:52-PST
From: Andrew Yao <YAO@SU-SCORE.ARPA>
Subject: [C.S./Math Library <LIBRARY@SU-SCORE.ARPA>: Math/CS Library Hours]
To: students@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA
Mail-From: LIBRARY created at 14-Dec-83 08:39:07
Date: Wed 14 Dec 83 08:39:07-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Math/CS Library Hours
To: yao@SU-SCORE.ARPA
cc: herriot@SU-SCORE.ARPA, cottle@SU-SCORE.ARPA, dek@SU-AI.ARPA
Andy,
Thank you for the opinions you gathered. With that information and the
statistics we kept for two Saturdays (evidently the faculty member we had
recorded as a user must have been Prof. Knuth), the library administration
decided that Math/CS should not cut back hours. In my two memos to the
library administration, I stressed the fact that our use continues to go
up and that use has been shown to be primarily by graduate students and
faculty. On the 10th of December we had 20 graduate students in the
library between 10 and 1. Because they decided not to close Math/CS on
the grounds that it gives out no keys, they extended that policy to the
other science branches without keys. Therefore only those branches that
give keys out will have their hours cut back. I will write a more formal
memo to the library committee explaining the decision. Please pass this
information on to those from whom you requested an opinion, and I would
like to thank those who responded.
Harry
-------
-------
∂14-Dec-83 1401 @SU-SCORE.ARPA:CAB@SU-AI CSD Colloquium
Received: from SU-SCORE by SU-AI with TCP/SMTP; 14 Dec 83 14:00:53 PST
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 14 Dec 83 14:00:09-PST
Date: 14 Dec 83 1358 PST
From: Chuck Bigelow <CAB@SU-AI>
Subject: CSD Colloquium
To: faculty@SU-SCORE
I am organizing the Winter Quarter CSD Colloquium.
Any suggestions for speakers would be greatly appreciated.
Thanks.
--Chuck Bigelow
∂14-Dec-83 1459 LAWS@SRI-AI.ARPA AIList Digest V1 #112
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Dec 83 14:56:09 PST
Delivery-Notice: While sending this message to SU-AI.ARPA, the
SRI-AI.ARPA mailer was obliged to send this message in 50-byte
individually Pushed segments because normal TCP stream transmission
timed out. This probably indicates a problem with the receiving TCP
or SMTP server. See your site's software support if you have any questions.
Date: Wed 14 Dec 1983 10:03-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #112
To: AIList@SRI-AI
AIList Digest Wednesday, 14 Dec 1983 Volume 1 : Issue 112
Today's Topics:
Memorial Fund - Carl Engelman,
Programming Languages - Lisp Productivity,
Expert Systems - System Size,
Scientific Method - Information Sciences,
Jargon - Mental States,
Perception - Culture and Vision,
Natural Language - Flame
----------------------------------------------------------------------
Date: Fri 9 Dec 83 12:58:53-PST
From: Don Walker <WALKER@SRI-AI.ARPA>
Subject: Carl Engelman Memorial Fund
CARL ENGELMAN MEMORIAL FUND
Carl Engelman, one of the pioneers in artificial intelligence
research, died of a heart attack at his home in Cambridge, Massachusetts,
on November 26, 1983. He was the creator of MATHLAB, a program developed
in the 1960s for the symbolic manipulation of mathematical expressions.
His objective there was to supply the scientist with an interactive
computational aid of a "more intimate and liberating nature" than anything
available before. Many of the ideas generated in the development of MATHLAB
have influenced the architecture of other systems for symbolic and algebraic
manipulation.
Carl graduated from the City College of New York and then earned
an MS Degree in Mathematics at the Massachusetts Institute of Technology.
During most of his professional career, he worked at The MITRE Corporation
in Bedford, Massachusetts. In 1973 he was on leave as a visiting professor
at the Institute of Information Science of the University of Turin, under a
grant from the Italian National Research Council.
At the time of his death Carl was an Associate Department Head
at MITRE, responsible for a number of research projects in artificial
intelligence. His best known recent work was KNOBS, a knowledge-based
system for interactive planning that was one of the first expert systems
applied productively to military problems. Originally developed for the
Air Force, KNOBS was then adapted for a Navy system and is currently being
used in two NASA applications. Other activities under his direction
included research on natural language understanding and automatic
programming.
Carl published a number of papers in journals and books and gave
presentations at many conferences. But he also illuminated every meeting
he attended with his incisive analysis and his keen wit. While he will
be remembered for his contributions to artificial intelligence, those
who knew him personally will deeply miss his warmth and humor, which he
generously shared with so many of us. Carl was particularly helpful to
people who had professional problems or faced career choices; his paternal
support, personal sponsorship, and private intervention made significant
differences for many of his colleagues.
Carl was a member of the American Association for Artificial
Intelligence, the American Institute of Aeronautics and Astronautics, the
American Mathematical Society, the Association for Computational
Linguistics, and the Association for Computing Machinery and its Special
Interest Group on Artificial Intelligence.
Contributions to the "Carl Engelman Memorial Fund" should be
sent to Judy Clapp at The MITRE Corporation, Bedford, Massachusetts 01730.
A decision will be made later on how those funds will be used.
------------------------------
Date: Tue, 13 Dec 83 09:49 PST
From: Kandt.pasa@PARC-MAXC.ARPA
Subject: re: lisp productivity question
Jonathan Slocum (University of Texas at Austin) has a large natural
language translation program (thousands of lines of Interlisp) that was
originally in Fortran. The compression that he got was 16.7:1. Also, I
once wrote a primitive production rule system in both Pascal and
Maclisp. The Pascal version was over 2000 lines of code and the Lisp
version was about 200 or so. The Pascal version also was not as
powerful as the Lisp version because of Pascal's strong data typing and
dynamic allocation scheme.
-- Kirk
------------------------------
Date: 9 Dec 83 19:30:46-PST (Fri)
From: decvax!cca!ima!inmet!bhyde @ Ucb-Vax
Subject: Re: RE: Expert Systems - (nf)
Article-I.D.: inmet.578
I would like to add to Gary's comments. There are also issues of
scale to be considered. Many of the systems built outside of AI
are orders of magnitude larger. I was amazed to read that at one
point the largest OPS production system, a computer game called Haunt,
had so very few rules in it. A compiler written using a rule based
approach would have 100 times as many rules. How big are the
AI systems that folks actually build?
The engineering component of large systems obscures the architectural
issues involved in their construction. I have heard it said that
AI isn't a field, it is a stage of the problem solving process.
It seems telling that the ARPA 5-year speech recognition project
succeeded not with Hearsay (I gather that after it was too late it
did manage to meet the performance requirements), but with Harpy. Now,
Harpy was very much like a signal-processing program. The "beam search"
mechanism it used is very different from the popular approaches of
the AI community. In the end it seems that it was an act of engineering,
with little insight gained into the nature of knowledge.
The issues that caused AI and the rest of computing to split a few
decades ago seem almost quaint now. Allen Newell has a pleasing paper
about these. Only the importance of an interpreter-based program
development environment seems to persist. Can you buy a workstation
capable of sharing files with your 360 yet?
[...]
ben hyde
------------------------------
Date: 10 Dec 83 16:33:59-PST (Sat)
From: decvax!ittvax!dcdwest!sdcsvax!davidson @ Ucb-Vax
Subject: Information sciences vs. physical sciences
Article-I.D.: sdcsvax.84
I am responding to an article claiming that psychology and computer
science aren't sciences. I think that the author is seriously confused
by his preferred usage of the term ``science''. The sciences based on
mathematics, information processing, etc., which I will here call
information sciences, e.g., linguistics, computer science, information
science, cognitive science, psychology, operations research, etc., have
very different methods of operation from sciences based upon, for
example, physics. Since people often view physics as the prototypical
science, they become confused when they look at information sciences.
This is analogous to the confusion of the early grammarians who tried
to understand English from a background in Latin: They decided that
English was primitive and in need of fixing, and proceeded to create
Grammar schools in which we were all supposed to learn how to speak
our native language properly (i.e., with intrusions of Latin grammar).
If someone wants to have a private definition of the word science to
include only some methods of operation, that's their privilege, as
long as they don't want to try to use words to communicate with other
human beings. But we shouldn't waste too much time defining terms,
when we could be exploring the nature and utility of the methodologies
used in the various disciplines. In that light, let me say something
about the methodologies of two of the disciplines as I understand and
practice them, respectively.
Physics: There is here the assumption of a simple underlying reality,
which we want to discover through elegant theorizing and experimenting.
Compared to other disciplines, e.g., experimental psychology, many of
the experimental tools are crude, e.g., the statistics used. A theoretical
psychologist would probably find the distance that often separates physical
theory from experiment to be enormous. This is perfectly alright, given
the (assumed) simple nature of underlying reality.
Computer Science: Although in any mathematically based science one
might say that one is discovering knowledge, in many ways it makes
better sense in computer science to say that one is creating as much
as discovering. Someone will invent a new language, a new architecture,
or a new algorithm, and people will abandon older languages, architectures
and algorithms. A physicist would find this strange, because these objects
are no less valid for having been surpassed (the way an outdated physical
theory would be), but are simply no longer interesting.
Let me stop here, and solicit some input from people involved in other
disciplines. What are your methods of investigation? Are you interested
in creating theories about reality, or creating artificial or abstract
realities? What is your basis for calling your discipline a science,
or do you? Please do not waste any time saying that some other discipline
is not a science because it doesn't do things the way yours does!
-Greg
------------------------------
Date: Sun, 11 Dec 83 20:43 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states
Ken Laws in his little editorializing comment on my last note seems to
have completely missed the point. Whether FSA's can display mental
states is an argument I leave to others on this list. However, John
McCarthy's definition allows ant hills and colloidal suspensions to
have mental states.
------------------------------
Date: Sun, 11 Dec 1983 15:04:10 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications
Mgr.)
Subject: Culture and Vision
Several people have recently been bringing up the question of the
effects of culture on visual perception. This problem has been around
in anthropology, folkloristics, and (to some extent) in sociolinguistics
for a number of years. I've personally taken a number of graduate courses
that focussed on this very topic.
Individuals interested in this problem (or, more precisely, group of
problems) should look into the Society for the Anthropology of Visual
Communication (SAVICOM) and its journal. You'll find that the terminology
is often unfamiliar, but the concerns are similar. The society is based
at the University of Pennsylvania's Annenberg School of Communications,
and is formally linked with such relevant groups as the American Anthro-
pological Assn.
Folks who want more info, citations, etc. on this can also contact
me personally by netmail, as I'm not sure that this is sufficiently
relevant to take up too much of AI's space.
Dave Axler
(Axler.Upenn-1100@Rand-Relay)
[Extract from further correspondence with Dave:]
There is a thing called "Visual Anthropology", on the
other hand, which deals with the ways that visual tools such as film, video,
still photography, etc., can be used by the anthropologist. The SAVICOM
journal occasionally has articles dealing with the "meta" aspects of visual
anthropology, causing it, at such times, to be dealing with the anthropology
of visual anthropology (or, at least, the epistemology thereof...)
--Dave Axler
------------------------------
Date: Mon 12 Dec 83 21:16:43-PST
From: Martin Giles <MADAGIL@SU-SIERRA.ARPA>
Subject: A humanities view of computers and natural language
The following is a copy of an article in the Stanford Campus Report of
December 7, 1983, in response to an article describing research at
Stanford. The University has just received a $21 million grant for
research in the fields of natural and computer languages.
Martin
[I have extracted a few relevant paragraphs from the following 13K-char
flame. Anyone wanting the full text can contact AIList-Request or FTP
it from <AILIST>COHN.TXT on SRI-AI. I will delete it after a few weeks.
-- KIL]
Mail-From: J.JACKSON1 created at 10-Dec-83 10:29:54
Date: Sat 10 Dec 83 10:29:54-PST
From: Charlie Jackson <J.JACKSON1@LOTS-A>
Subject: F; (Gunning Fog Index 20.18); Cohn on Computer Language Study
To: bboard@LOTS-A
Following is a letter found in this week's Campus Report that proves
Humanities profs make as good flames as any CS hacker. Charlie
THE NATURE OF LANGUAGE IS ALREADY KNOWN WITHOUT COMPUTERS
Following is a response from Robert Greer Cohn, professor of French, to
the Nov. 30 Campus Report article on the study of computer and natural
language.
The ambitious program to investigate the nature of language in
connection with computers raises some far-reaching questions. If it is
to be a sort of Manhattan project, to outdo the Japanese in developing
machines that "think" and "communicate" in a sophisticated way, that is
one thing, and one may question how far a university should turn itself
towards such practical, essentially engineering, matters. If on the
other hand, they are serious about delving into the nature of languages
for the sake of disinterested truth, that is another pair of shoes.
Concerning the latter direction: no committee ever instituted
has made the kind of breakthrough individual genius alone can
accomplish. [...]
Do they want to know the nature of language? It is already
known.
The great breakthrough came with Stephane Mallarme, who, as Edmund
Wilson (and later Hugh Kenner) observed, was comparable only to Einstein
for revolutionary impact. He is responsible more than anyone, even
Nietzsche, for the 20th-century /episteme/, as most French first-rank
intellectuals agree (for example, Foucault, in "Les mots et les choses";
Sartre, in his preface to the "Poesies"; Roland Barthes, who said in his
"Interview with Stephen Hearth," "All we do is repeat Mallarme";
Jakobson; Derrida; countless others).
In his "Notes" Mallarme saw the essence of language as
"fiction," which is to say it is based on paradox. In the terms of
Darwin, who describes it as "half art, half instinct," this means that
language, as related to all other reality (hypothetically nonlinguistic,
experimental) is "metaphorical" -- as we now say after Jakobson -- i.e.
above and below the horizontal line of on-going, spontaneous,
comparatively undammed, life-flow or experience; later, as the medium
of whatever level of creativity, it bears this relation to the
conventional and rational real, sanity, sobriety, and so on.
In this sense Chomsky's view of language as innate and
determined is a half-truth and not very inspired. He would have been
better off if he had read and pondered, for example, Pascal, who three
centuries ago knew that "nature is itself only a first 'custom'"; or
Shakespeare: "The art itself is nature" (The Winter's Tale).
[...]
But we can't go into all the aspects of language here.
In terms of the project: since, on balance, it is unlikely the
effects will go the way of elite French thought on the subject, there
remains the probability that they will try to recast language, which is
at its best creatively free (as well as determined at its best by
organic totality, which gives it its ultimate meaning, coherence,
harmony), into the narrow mold of the computer, even at /its/ best.
[...]
COMPUTERS AND NEWSPEAK
In other words, there is no way to make a machine speak anything
other than newspeak, the language of /1984/. They may overcome that
flat dead robotic tone that our children enjoy -- by contrast, it gives
them the feeling that they are in command of life -- but the thought and
the style will be spiritually inert. In that sense, the machines, or
the new language theories, will reflect their makers, who, in harnessing
themselves to a prefabricated goal, a program backed by a mental arms
race, will have been coopted and dehumanized. That flat (inner or
outer) tone is a direct result of cleaving to one-dimensionality, to the
dimension of the linear and "metonymic," the dimension of objectivity,
of technology and science, uninformed and uninspired by the creatively
free and whole-reflecting ("naive") vertical, or vibrant life itself.
That unidimensionality is visible in the immature personalities
of the zealots who push these programs: they are not much beyond
children in their Frankenstein eagerness to command the frightening
forces of the psyche, including sexuality, but more profoundly, life
itself, in its "existential" plenitude involving death.
People like that have their uses and can, with exemplary "tunnel
vision," get certain jobs done (like boring tunnels through miles of
rock). A group of them can come up with /engineering/ breakthroughs in
that sense, as in the case of the Manhattan project. But even that
follows the /creative/ breakthroughs of the Oppenheimers and Tellers and
Robert D. (the shepherd in France) and is a rather pedestrian endeavor
under the management of some colonel.
When I tried to engage a leader of the project in discussion
about the nature of language, he refused, saying, "The humanities and
sciences are farther apart than ever," clearly welcoming this
development. This is not only deplorable in itself; far worse,
according to the most accomplished mind on /their/ side of the fence in
this area, this man's widely-hailed thinking is doomed to a dead end,
because of its "unidimensionality!"
This is not the place to go into the whole saddening bent of
our times and the connection with totalitarianism, which is "integrated
systems" with a vengeance. But I doubt that this is what our founders
had in mind.
------------------------------
End of AIList Digest
********************
∂15-Dec-83 0249 @SU-SCORE.ARPA:ROD@SU-AI CSD Colloquium
Received: from SU-SCORE by SU-AI with TCP/SMTP; 15 Dec 83 02:49:14 PST
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Thu 15 Dec 83 02:45:25-PST
Date: 15 Dec 83 0244 PST
From: Rod Brooks <ROD@SU-AI>
Subject: CSD Colloquium
To: faculty@SU-SCORE
Actually its me who is organizing Winter Quarter CSD Colloquium.
So send those cards and letters with suggested speakers to me.
Chuck Bigelow will appreciate this too.
Rod Brooks
∂15-Dec-83 0858 DKANERVA@SRI-AI.ARPA newsletter No. 13, December 15, 1983
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Dec 83 08:57:29 PST
Date: Thu 15 Dec 83 08:07:59-PST
From: DKANERVA@SRI-AI.ARPA
Subject: newsletter No. 13, December 15, 1983
To: csli-folks@SRI-AI.ARPA
CSLI Newsletter
* * *
December 15, 1983 Number 13
This is the end of the quarter and the last CSLI Newsletter for
1983. The next issue will appear on Thursday, January 5, 1984. Some
of the activities at CSLI will be resuming that day as well; for
example, Fernando Pereira will be leading the discussion at TINLunch.
Other activities will be announced as they are scheduled.
Happy New Year to us all!
* * * * * * *
SITE AND CONCEPT APPROVAL FOR NEW CSLI BUILDING
On December 12, the Stanford Board of Trustees gave site and
concept approval for a new CSLI building. John Perry had attended the
meeting and had told the trustees more about us and the plans for the
building. We still need to raise the funds for a building, but this
particular hurdle is behind us.
- Jon Barwise
* * * * * * *
NEW MEMBERS OF CSLI
Following discussions with Rod Burstall of the Advisory Panel and
with members of the Executive Committee, we have invited Joe Goguen
and Jose (Pepe) Meseguer to join CSLI to help with the research and
planning in Area C, theories of situated computer languages. I am
delighted to say that they have agreed. They are first-rate
researchers in Area C and are enthusiastic about what we are trying to
do at CSLI.
- Jon Barwise
* * * * * * *
CSLI COMPUTER FACILITY STAFF
Eric Ostrom, formerly Director of Computer Systems at the EE and
CS Departments at M.I.T., is the new Director of Computer Systems at
CSLI and has already negotiated the purchase of a DEC 2060 computer
for the general writing, computing, and communication needs of the
Center. He hopes to have the 2060 and the 15 Dandelions (already
here) installed within the next two months. He is now in the process
of hiring software and hardware personnel to maintain and develop the
computer systems of the Center.
Michele Leiser, Eric's assistant, joined CSLI on December 12.
Her recent duties at the Stanford Graduate School of Business allowed
frequent interaction with their DEC-20 and such software packages as
EMACS, MUSE, NCPCALC, and SYSTEM-1022. She will now be responsible
for coordinating Eric's schedule, capital equipment requisition and
maintenance, and staff training when our own DEC-20 arrives.
* * * * * * *
CSLI SCHEDULE FOR *THIS* THURSDAY, DECEMBER 15th, 1983
10:00 Research Seminar on Natural Language
Speaker: Martin Kay (Xerox-CSLI)
Title: "Unification"
Place: Redwood Hall, room G-19
12:00 TINLunch
Discussion leader: Ray Perrault (SRI-CSLI)
Paper for discussion: "On Time, Tense, and Aspect: An Essay
in English Metaphysics"
by Emmon Bach
Place: Ventura Hall
2:00 Research Seminar on Computer Languages
Speakers: Fernando Pereira and Stuart Shieber (SRI-CSLI)
Title: "Feature Systems and Their Use in Grammars"
Place: Redwood Hall, room G-19
3:30 Tea
Place: Ventura Hall
4:15 Colloquium
Speaker: Richard Waldinger (SRI)
Title: "Deductive Program Synthesis Research"
Place: Redwood Hall, room G-19
Note to visitors:
Redwood Hall is close to Ventura Hall on the Stanford Campus. It
can be reached from Campus Drive or Panama Street. From Campus Drive
follow the sign for Jordan Quad. $0.75 all-day parking is available
in a lot just off Campus Drive, across from the construction site.
* * * * * * *
MEETING OF PROJECT B2: SEMANTICS OF SENTENCES ABOUT MENTAL STATES
On Monday, December 12, at 12 noon, at the meeting of Project B2,
Bob Stalnaker gave a talk entitled "Problems with `De Re' Belief."
* * * * * * *
LET ALONE
Remember the excellent colloquium by Fillmore and Kay on "Let
Alone." You may have not have heard George Miller's example later: "I
can't understand `let' alone, let alone `let alone'." - Jon Barwise
* * * * * * *
TINLUNCH SCHEDULE
TINLunch will be held at 12 noon on Thursday, December 15, at
Ventura Hall, Stanford University. C. Raymond Perrault will lead the
discussion. The paper for discussion will be:
ON TIME, TENSE, AND ASPECT: AN ESSAY IN ENGLISH METAPHYSICS
by
Emmon Bach
TINLunch will be held each Thursday at Ventura Hall on the
Stanford University campus as a part of CSLI activities. Copies of
TINLunch papers will be at SRI in EJ251 and at Stanford University in
Ventura Hall.
NEXT WEEK: No TINLunch
December 22 & 29 Christmas Vacation
January 5 Fernando Pereira
January 12 Marcia Bush
January 19 John Perry
January 26 Stanley Peters
* * * * * * *
FRED DRETSKE TO SPEAK AT CSLI COLLOQUIUM
On Thursday, January 19, 1984, Fred Dretske, of the Philosophy
Department at the University of Wisconsin at Madison, will speak at the
CSLI Colloquium at 4:15 p.m. The title of his talk will be "Aspects
of Cognitive Representation."
Dretske will also be speaking on Friday, January 20, at the
Philosophy Department Colloquium (3:15 p.m., Bldg. 90, room 92Q).
The title of that talk will be "Misrepresentation: How to Get Things
Wrong."
* * * * * * *
DAVID MCCARTY TO TALK AT STANFORD
David McCarty, of the Philosophy Department of Ohio State
University, will be at Stanford the week of January 23, 1984. He will
be giving talks Tuesday through Thursday of that week. Abstracts of
his talks and details of time and place will be provided later. These
talks will be of interest especially to people in Area C, computer
languages.
* * * * * * *
SPECIAL ISSUE OF AJCL
The American Journal of Computational Linguistics is planning a
special issue devoted to the mathematical properties of linguistic
theories. Papers are hereby requested on the generative capacity of
various syntactic formalisms as well as the computational complexity
of their related recognition and parsing algorithms. Articles on the
significance (and the conditions for the significance) of such results
are also welcome. All papers will be subjected to the normal
refereeing process and must be accepted by the editor-in-chief, James
Allen. Indication of intention to submit would also be appreciated.
To allow for publication in fall 1984, five copies of each paper
should be sent by March 31, 1984, to the special-issue editor:
C. Raymond Perrault Arpanet: Rperrault@sri-ai
SRI International, EK268 Telephone: (415) 859-6470
Menlo Park, CA 94025.
* * * * * * *
COMPUTER SCIENCE COLLOQUIUM NOTICE, WEEK OF 12/12-12/16
12/13/1983  Special Knowledge Representation Seminar
  Tuesday, 1:30 - 2:30, M-112 Medical Center
  Speaker: John Tsotsos, University of Toronto
  Title: "Knowledge Organization: Its Role in Representation,
          Decision Making and Explanation Schemes for Expert Systems"
12/14/1983  Computers, Cognition and Education Seminar
  Wednesday, 12:00-13:00, Cubberley 114
  Speaker: F. Reif, Physics/Education, UC Berkeley
  Title: "Prescriptive Studies of Human Cognitive Performance
          and Instruction"
12/14/1983  Talkware Seminar
  Wednesday, 2:15-4:00, 380Y (Math Corner)
  Summary and Discussion (everyone)
12/15/1983  AFLB
  Thursday, 12:30, MJH352
  Speaker: Andrei Broder, Stanford University
  (No AFLB until January 12)
12/15/1983  CSLI Colloquium
  Thursday, 4:15 p.m., Redwood Hall, Rm. G-19
  Speaker: Richard Waldinger, AI Center, SRI International
  Title: "Deductive Program Synthesis Research"
12/16/1983  Database Research Seminar
  Friday: See you Friday the 13th of January
* * * * * * *
To: The Sloan Cognitive Science Group
Members of CSLI
All interested faculty and graduate students
From: Jon Barwise and Amos Tversky
Subject: The Study of Cognition and Information at Stanford
Date: December 12, 1983
The areas of cognitive and informational sciences at Stanford
have recently been greatly strengthened by the support of the Sloan
Foundation, leading to the establishment of the Sloan Cognitive
Science Program (SCSP), and by support of the System Development
Foundation, Program on Situated Language (Program SL), at the Center
for the Study of Language and Information (CSLI). It is hoped that
these developments will advance the study of cognition and information
and will provide opportunities for interdisciplinary contact among
computer scientists, linguists, logicians, philosophers and
psychologists in the Stanford area. The purpose of this memo is to
inform this community about SCSP, CSLI, and Program SL.
All these projects focus on the use of symbols to process, store,
and communicate information about the world. These symbolic processes
include language, perception, thought, and computation. The focus of
SCSP is the use of cognitive activities as a window on the human mind.
Complementing this approach, the focus of research in Program SL at
CSLI is on the use of language to communicate information about the
world.
I. The Sloan Cognitive Science Program
One goal of the program is the development of a program of study
in Cognitive Science. We plan to offer a special program of graduate
study leading to a field designation in cognitive science. The
program will be based on offerings in the departments of linguistics,
psychology, philosophy, and computer science. It will be supplemented
by special courses to be given by visitors. A few fellowships will be
made available to students who participate in the program.
The second goal of the program is to initiate and support
cognitive science activities at Stanford. We expect to bring to
Stanford visiting scholars and postdoctoral fellows who could
contribute to and benefit from the interaction with our faculty and
graduate students. In addition, the program would encourage seminars,
workshops, and conferences proposed by members of the community.
Although the program will not support individual research projects, it
may support special efforts, e.g., curriculum development or
activities that benefit many members of the community.
The program is administered by an executive committee including:
Amos Tversky (Chair), Terry Winograd (Computer Science), Ivan Sag
(Linguistics), John Perry (Philosophy), and Ellen Markman
(Psychology). The administrative secretary is Mary Ballard (Bldg.
420, Room 104, ext. 7-3996). Announcements and planned activities of
the Sloan Program will appear in the CSLI Newsletter (see below).
II. The Center for the Study of Language and Information
and Program SL
CSLI was founded early in 1983. It grew out of a long-standing
collaboration between scientists at research laboratories in the Palo
Alto area and the faculty and students of the Stanford groups
mentioned above, but also out of a need to provide an institutional
effort to further this work. CSLI is based at Ventura Hall, with
satellites at SRI, Xerox PARC, and Fairchild. As its first major
research program, CSLI is undertaking a study of situated language, by
which is meant language as used by active agents situated in the
world. Situated languages include both human languages and computer
languages. The major goals of Program SL are to develop theories of
communication, action, and reasoning adequate to understand such
situated language. Concomitant with that, the program is aimed at
developing the foundations for these theories, including an underlying
theory of information and the fundamentals of computation and action,
and a theory of inference and logic.
Another goal of Program SL is to develop a program of study in
this area to be integrated with the cognitive science curriculum. We
plan a two-year course, within the departments mentioned above, one on
language as a means of communication, the other on the foundations of
situated language. In addition, there are research seminars each
quarter on various aspects of human and computer languages. These
seminars are open to anyone interested in attending. CSLI has some
graduate support for students who participate in this program.
CSLI plans to bring outstanding visitors and postdoctoral fellows
to the area and will coordinate these activities with SCSP and the
relevant departments. CSLI also plans to have other research
programs, outside of Program SL, supported by other funding agencies.
In the long run, it will be able to provide an excellent
computational, physical, and editorial environment for such work.
Scholars interested in basing such research activities at CSLI should
contact the director, assistant director, or a member of the executive
committee.
CSLI is administered by a director, Jon Barwise, an assistant
director, Elizabeth Macken (room 24, Ventura Hall, ext. 7-1224), and
by an executive committee: Barbara Grosz, John Perry, Stanley Peters,
and Brian Smith.
A fuller description of Program SL, and copies of CSLI's weekly
newsletter, can be obtained from the Administrative Secretary, Pat
Wunderman, room 23, Ventura Hall, ext. 7-1131.
-------
∂15-Dec-83 0906 KJB@SRI-AI.ARPA Next quarter's schedule
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Dec 83 09:06:18 PST
Date: Thu 15 Dec 83 09:03:16-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Next quarter's schedule
To: csli-friends@SRI-AI.ARPA
Except for TINLunch, which will start on January 5, the regular
Thursday activities will begin with the start of the new quarter,
and so will begin on January 12, 1984. Happy Vacation to all.
-------
∂15-Dec-83 0912 KJB@SRI-AI.ARPA Holiday greetings to us from Bell Labs
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Dec 83 09:12:42 PST
Date: Thu 15 Dec 83 09:07:10-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Holiday greetings to us from Bell Labs
To: csli-folks@SRI-AI.ARPA
CSLI got its first greeting card yesterday, from all our friends
at Bell Labs. I can't read many of the names, so will post it on
the bulletin board.
We are going to get pictures of everyone for the bulletin board,
so don't be surprised if someone snaps your picture soon.
-------
∂15-Dec-83 2118 @SU-SCORE.ARPA:CMILLER@SUMEX-AIM.ARPA [Carole Miller <CMILLER@SUMEX-AIM.ARPA>: HPP OPEN HOUSE - 12/15]
Received: from SU-SCORE by SU-AI with TCP/SMTP; 15 Dec 83 21:18:07 PST
Received: from SUMEX-AIM.ARPA by SU-SCORE.ARPA with TCP; Thu 15 Dec 83 21:17:58-PST
Date: Thu 15 Dec 83 10:21:08-PST
From: Carole Miller <CMILLER@SUMEX-AIM.ARPA>
Subject: [Carole Miller <CMILLER@SUMEX-AIM.ARPA>: HPP OPEN HOUSE - 12/15]
To: FACULTY@SU-SCORE.ARPA, ADMIN@SU-SCORE.ARPA
Please accept my apologies for the short notice. I thought I'd sent
this message the other day, but Score did not oblige. Hope you can
make it. We'll look forward to seeing you this afternoon...Carole Miller
---------------
Date: Tue 13 Dec 83 15:53:00-PST
From: Carole Miller <CMILLER@SUMEX-AIM.ARPA>
Subject: HPP OPEN HOUSE - 12/15
To: HPP@SUMEX-AIM.ARPA, CSD-ADMINISTRATION@SU-SCORE.ARPA, SUMEX-STAFF@SUMEX-AIM.ARPA,
CSD-FACULTY@SU-SCORE.ARPA, ULLMAN@SU-SCORE.ARPA, SHORTLIFFE@SUMEX-AIM.ARPA,
VIAN@SUMEX-AIM.ARPA
****************************************
OPEN HOUSE
HEURISTIC PROGRAMMING PROJECT
Come Celebrate
the HPP Move to New Offices
and Share with Us the Spirit
of the Holiday Season
THURSDAY, DECEMBER 15TH
3-5 P.M.
701 WELCH ROAD, BUILDING C
****************************************
-------
-------
∂16-Dec-83 0810 @SU-SCORE.ARPA:uucp@Shasta Re: Update on Bell Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Dec 83 08:10:20 PST
Received: from Shasta by SU-SCORE.ARPA with TCP; Fri 16 Dec 83 08:10:02-PST
Received: from decwrl by Shasta with UUCP; Fri, 16 Dec 83 08:06 PST
Date: 16 Dec 1983 0737-PST (Friday)
Sender: uucp@Shasta
From: decwrl!baskett (Forest Baskett) <decwrl!baskett@Shasta>
Subject: Re: Update on Bell Nominations
Message-Id: <8312161537.AA03721@DECWRL>
Received: by DECWRL (3.327/4.09) 16 Dec 83 07:37:50 PST (Fri)
To: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>
Cc: JF@SU-SCORE.ARPA, Faculty@SU-SCORE.ARPA
In-Reply-To: Your message of Mon 12 Dec 83 17:05:21-PST.
<8312130120.AA27373@DECWRL>
Since Jeff Naughton is currently a second year student, I propose we
take him off the nomination list.
Forest
∂16-Dec-83 0818 @SU-SCORE.ARPA:reid@Glacier Re: Update on Bell Nominations
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Dec 83 08:18:04 PST
Received: from Glacier by SU-SCORE.ARPA with TCP; Fri 16 Dec 83 08:17:54-PST
Date: Friday, 16 December 1983 08:17:15-PST
To: decwrl!baskett (Forest Baskett) <decwrl!baskett@Shasta>
Cc: Carolyn Tajnai <TAJNAI@SU-SCORE.ARPA>, JF@SU-SCORE.ARPA,
Faculty@SU-SCORE.ARPA
Subject: Re: Update on Bell Nominations
In-Reply-To: Your message of 16 Dec 1983 0737-PST (Friday).
<8312161537.AA03721@DECWRL>
From: Brian Reid <reid@Glacier>
Take second-year students off the list? I thought the only requirement
for the Bell fellowship was that the student would finish within 4
years. To be fair, Keith Hall and Kim McCall are also technically
2nd-year students, because both of them were in the MS program last
year.
Brian
∂16-Dec-83 0827 WUNDERMAN@SRI-AI.ARPA Friday morning staff meetings at Ventura
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Dec 83 08:27:16 PST
Date: Fri 16 Dec 83 08:24:42-PST
From: WUNDERMAN@SRI-AI.ARPA
Subject: Friday morning staff meetings at Ventura
To: CSLI-friends@SRI-AI.ARPA
A note to remind you that on Fri. mornings from 8:30-9:30 the Ventura
staff is in a meeting and our phones are not answered. If you have an
emergency, call the lobby phone and let it ring: 497-0628. Thanks for
your cooperation.
-------
∂16-Dec-83 1128 WILKINS@SRI-AI.ARPA Prof. Cohn's response to CSLI
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Dec 83 11:28:39 PST
Date: Fri 16 Dec 83 11:24:38-PST
From: Wilkins <WILKINS@SRI-AI.ARPA>
Subject: Prof. Cohn's response to CSLI
To: CSLI-FRIENDS@SRI-AI.ARPA
For those of you who haven't seen it, the complete text of Professor Cohn's
response to the CSLI project is in <wilkins>cohn and can be ftped from sri-ai.
The last half of the file contains numerous comments from the SAIL bboard.
David
-------
∂16-Dec-83 1327 LAWS@SRI-AI.ARPA AIList Digest V1 #113
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Dec 83 13:27:18 PST
Date: Fri 16 Dec 1983 10:02-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #113
To: AIList@SRI-AI
AIList Digest Friday, 16 Dec 1983 Volume 1 : Issue 113
Today's Topics:
Alert - Temporal Representation & Fuzzy Reasoning
Programming Languages - Phrasal Analysis Paper,
Fifth Generation - Japanese and U.S. Views,
Seminars - Design Verification & Fault Diagnosis
----------------------------------------------------------------------
Date: Wed 14 Dec 83 11:21:47-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: CACM Alert - Temporal Representation & Fuzzy Reasoning
Two articles in the Nov. issue of CACM (just arrived) may be of
special interest to AI researchers:
"Maintaining Knowledge about Temporal Intervals," by James F. Allen
of the U. of Rochester, is about representation of temporal information
using only intervals -- no points. While this work does not lead to a
fully general temporal calculus, it goes well beyond state space and
date line systems and is more powerful and efficient than event chaining
representations. I can imagine that the approach could be generalized
to higher dimensions, e.g., for reasoning about the relationships of
image regions or objects in the 3-D world.
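As a rough illustration of the interval vocabulary (an editorial sketch,
not from Allen's paper): the calculus rests on thirteen qualitative
relations between intervals. The toy Python code below just names the
relation holding between two numeric intervals, for intuition; Allen's
system manipulates these relation names symbolically, without any
endpoint coordinates.
def allen_relation(a_start, a_end, b_start, b_end):
    """Name the Allen relation between A = [a_start, a_end) and B = [b_start, b_end)."""
    assert a_start < a_end and b_start < b_end, "intervals must be non-degenerate"
    if a_end < b_start: return "before"
    if b_end < a_start: return "after"
    if a_end == b_start: return "meets"
    if b_end == a_start: return "met-by"
    if a_start == b_start and a_end == b_end: return "equals"
    if a_start == b_start: return "starts" if a_end < b_end else "started-by"
    if a_end == b_end: return "finishes" if a_start > b_start else "finished-by"
    if b_start < a_start and a_end < b_end: return "during"
    if a_start < b_start and b_end < a_end: return "contains"
    return "overlaps" if a_start < b_start else "overlapped-by"
print(allen_relation(1, 3, 2, 5), allen_relation(0, 2, 2, 4), allen_relation(3, 4, 1, 6))
# -> overlaps meets during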
"Extended Boolean Information Retrieval," by Gerald Salton, Edward A. Fox,
and Harry Wu, presents a fuzzy logic or hierarchical inference method for
dealing with uncertainties when evaluating logical formulas. In a
formula such as ((A and B) or (B and C)), they present evidential
combining formulas that allow for:
* Uncertainty in the truth, reliability, or applicability of the
primitive terms A and B;
* Differing importance of establishing the primitive term instances
(where the two B terms above could be weighted differently);
* Differing semantics of the logical connectives (where the two
"and" connectives above could be threshold units with different
thresholds).
The output of their formula evaluator is a numerical score. They use
this for ranking the pertinence of literature citations to a database
query, but it could also be used for evidential reasoning or for
evaluating possible worlds in a planning system. For the database
query system, they indicate a method for determining term weights
automatically from an inverted index of the database.
The weighting of the Boolean connectives is based on the infinite set
of Lp vector norms. The connectives and[INF] and or[INF] are the
ones of standard logic; and[1] and or[1] are equivalent and reduce
formula evaluation to a simple weighted summation; intermediate
connective norms correspond to "mostly" gates or weighted neural
logic models. The authors present both graphical illustrations and
logical theorems about these connectives.
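As a rough illustration of the p-norm connectives (a sketch whose
formulas are my reconstruction of the extended Boolean model, not quoted
from the article): term scores d and weights w lie in [0,1]; p = infinity
is handled here as plain min/max, the standard-logic limit, and p = 1
collapses both connectives to the same weighted average, as described
above.
import math
def p_norm_or(scores, weights, p):
    # p = infinity: ordinary fuzzy OR (max), weights drop out.
    if math.isinf(p):
        return max(scores)
    num = sum(w**p * d**p for d, w in zip(scores, weights))
    den = sum(w**p for w in weights)
    return (num / den) ** (1.0 / p)
def p_norm_and(scores, weights, p):
    # p = infinity: ordinary fuzzy AND (min), weights drop out.
    if math.isinf(p):
        return min(scores)
    num = sum(w**p * (1.0 - d)**p for d, w in zip(scores, weights))
    den = sum(w**p for w in weights)
    return 1.0 - (num / den) ** (1.0 / p)
d, w = [0.9, 0.2], [1.0, 1.0]
for p in (1, 2, math.inf):
    print(p, round(p_norm_and(d, w, p), 3), round(p_norm_or(d, w, p), 3))
# p = 1 prints 0.55 for both connectives; p = inf prints 0.2 and 0.9.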
-- Ken Laws
------------------------------
Date: 14 Dec 83 20:05:25-PST (Wed)
From: hplabs!hpda!fortune!phipps @ Ucb-Vax
Subject: Re: Phrasal Analysis Paper/Programming Languages Applications ?
Article-I.D.: fortune.1981
Am I way off base, or does this look as if the VOX project
would be of interest to programming languages (PL) researchers ?
It might be interesting to submit to the next
"Principles of Programming Languages" (POPL) conference, too.
As people turn from traditional programming languages
(is Ada really pointing the way of the future ? <shudder !>) to other ways
(query languages and outright natural language processing)
to obtain and manipulate information and codified knowledge,
I believe that AI and PL people will find more overlap in their ends,
although probably not their respective interests, approaches, and style.
This institutionalized mutual ignorance doesn't benefit either field.
One of these days, AI people and programming languages people
ought to heal their schism.
I'd certainly like to hear more of VOX, and would cheerfully accept delivery
of a copy of your paper (US Mail (mine): PO Box 2284, Santa Clara CA 95055).
My apologies for using the net for a reply, but he's unreachable
thru USENET, and I wanted to make a general point anyhow.
-- Clay Phipps
--
{allegra,amd70,cbosgd,dsd,floyd,harpo,hollywood,hpda,ihnp4,
magic,megatest,nsc,oliveb,sri-unix,twg,varian,VisiA,wdl1}
!fortune!phipps
------------------------------
Date: 12 Dec 83 15:29:10 PST (Monday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: New Generation computing: Japanese and U.S. views
[The following is a direct submission to AIList, not a reprint.
It has also appeared on the Stanford bboards, and has generated
considerable discussion there. I am distributing this and the
following two reprints because they raise legitimate questions
about the research funding channels available to AI workers. My
distribution of these particular messages should not be taken as
evidence of support for or against military research. -- KIL]
from Japan:
"It is necessary for each researcher in the New Generation Computer
technology field to work for world prosperity and the progress of
mankind.
"I think it is the responsibility of each researcher, engineer and
scientist in this field to ensure that KIPS [Knowledge Information
Processing System] is used for good, not harmful, purposes. It is also
necessary to investigate KIPS's influence on society concurrent with
KIPS's development."
--Tohru Moto-Oka, University of Tokyo, editor of the new journal "New
Generation Computing", in the journal's founding statement (Vol. 1, No.
1, 1983, p. 2)
and from the U.S.:
"If the new generation technology evolves as we now expect, there will
be unique new opportunities for military applications of computing. For
example, instead of fielding simple guided missiles or remotely piloted
vehicles, we might launch completely autonomous land, sea, and air
vehicles capable of complex, far-ranging reconnaissance and attack
missions. The possibilities are quite startling, and suggest that new
generation computing could fundamentally change the nature of future
conflicts."
--Defense Advanced Research Projects Agency, "Strategic Computing:
New Generation Computing Technology: A Strategic Plan for its
Development and Application to Critical Problems in Defense," 28
October 1983, p. 1
------------------------------
Date: 13 Dec 83 18:18:23 PST (Tuesday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Re: New Generation computing: Japanese and U.S. views
[Reprinted from the SU-SCORE bboard.]
My juxtaposition of quotations is intended to demonstrate the difference
in priorities between the Japanese and U.S. "next generation" computer
research programs. Moto-Oka is a prime mover behind the Japanese
program, and DARPA's Robert Kahn is a prime mover behind the American
one. Thus I consider the quotations comparable.
To put it bluntly: the Japanese say they are developing this technology
to help solve human and social problems. The Americans say they are
developing this technology to find more efficient ways of killing
people.
The difference in intent is quite striking, and will undoubtedly produce
a "next-generation" repetition of an all too familiar syndrome. While
the U.S. pours yet more money and scientific talent into the military
sinkhole, the Japanese invest their monetary and human capital in
projects that will produce profitable industrial products.
Here are a couple more comparable quotes, both from IEEE Spectrum, Vol.
20, No. 11, November 1983:
"DARPA intends to apply the computers developed in this program to a
number of broad military applications...
"An example might be a pilot's assistant that can respond to spoken
commands by a pilot and carry them out without error, drawing upon
specific aircraft, sensor, and tactical knowledge stored in memory and
upon prodigious computer power. Such capability could free a pilot to
concentrate on tactics while the computer automatically activated
surveillance sensors, interpreted radar, optical, and electronic
intelligence, and prepared appropriate weapons systems to counter
hostile aircraft or missiles....
"Such systems may also help in military assessments on a battlefield,
simulating and predicting the consequences of various courses of
military action and interpreting signals acquired on the battlefield.
This information could be compiled and presented as sophisticated
graphics that would allow a commander and his staff to concentrate on
the larger strategic issues, rather than having to manage the enormous
data flow that will[!] characterize future battles."
--Robert S. Cooper and Robert E. Kahn, DARPA, page 53.
"Fifth generation computers systems are exptected to fulfill four
major roles: (1) enhancement of productivity in low-productivity areas,
such as nonstandardized operations in smaller industries; (2)
conservation of national resources and energy through optimal energy
conversion; (3) establishment of medical, educational, and other kinds
of support systems for solving complex social problems, such as the
transition to a society made up largely of the elderly; and (4)
fostering of international cooperation through the machine translation
of languages."
--Tohru Moto-Oka, University of Tokyo, page 46
Which end result would *you* rather see?
/Ron
------------------------------
Date: Tue 13 Dec 83 21:29:22-PST
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Comparable quotes
[Reprinted from the SU-SCORE bboard.]
The goals of an effort funded by the military will be different
than those of an effort aimed at trade dominance. Intel stayed out of
the DoD VHSIC program because the founder of Intel felt that concentrating
on fast, expensive circuits would be bad for business. He was right.
The VHSIC program is aimed at making a few hundred copies of an IC for
a few thousand each. Concentration on that kind of product will bankrupt
a semiconductor company.
We see the same thing in AI. There is getting to be a mini-industry
built around big expensive AI systems on big expensive computers. Nobody
is thinking of volume. This is a direct consequence of the funding source.
People think in terms of keeping the grants coming in, not selling a
million copies. If money came from something like MITI, there would be
pressure to push forward to a volume product just to find out if there
is real potential for the technology in the real world. Then there would
be thousands of people thinking about the problems in the field, not
just a few hundred.
This is diverging from the main thrust of the previous flame, but
think about this and reply. There is more here than another stab at the
big bad military.
------------------------------
Date: Tue 13 Dec 83 10:40:04-PST
From: Sumit Ghosh <GHOSH@SU-SIERRA.ARPA>
Subject: Ph.D. Oral Examination: Special Seminar
[Reprinted from the SU-SCORE bboard.]
ADA Techniques for Implementing a Rule-Based Generalised Design Verifier
Speaker: Sumit Ghosh
Ph.D. Oral Examination
Monday, 19th Dec '83. 3:30pm. AEL 109
This thesis describes a top-down, rule-based design verifier implemented in
the language ADA. During verification of a system design, a designer needs
several different kinds of simulation tools such as functional simulation,
timing verification, fault simulation, etc. Often these tools are implemented
in different languages and on different machines, thereby making it difficult to
correlate results from different kinds of simulations. Also the system design
must be described in each of the different kinds of simulation, implying a
substantial overhead. The rule-based approach enables one to create different
kinds of simulations, within the same simulation environment, by linking
appropriate type of models with the system nucleus. This system also features
zooming whereby certain subsections of the system design (described at a high
level) can be expanded at a lower level, at run time, for a more detailed
simulation. The expansion process is recursive and should be extended down to
the circuit level. At the present implementation stage, zooming is extended to
gate level simulation. Since only those modules that show discrepancy (or
require detailed analysis) during simulation are simulated in detail, the
zoom technique implies a substantial reduction in complexity and CPU time.
This thesis further contributes towards a functional deductive fault simulator
and a generalised timing verifier.
------------------------------
Date: Mon 12 Dec 83 12:46-EST
From: Philip E. Agre <AGRE%MIT-OZ@MIT-MC.ARPA>
Subject: Walter Hamscher at the AI Revolving Seminar
[Reprinted from the MIT-AI bboard.]
AI Revolving Seminar
Walter Hamscher
Diagnostic reasoning for digital devices with static storage elements
Wednesday 14 December 83 4PM
545 Tech Sq 8th floor playroom
We view diagnosis as a process of reasoning from anomalous observations to a
set of components whose failure could explain the observed misbehaviors. We
call these components "candidates." Diagnosing a misbehaving piece of
hardware can be viewed as a process of generating, discriminating among, and
refining these candidates. We wish to perform this diagnosis by using an
explicit representation of the hardware's structure and function.
Our candidate generation methodology is based on the notions of dependency
directed backtracking and local propagation of constraints. This
methodology works well for devices without storage elements such as
flipflops. This talk presents a representation for the temporal behavior of
digital devices which allows devices with storage elements to be treated
much the same as combinational devices for the purpose of candidate
generation.
However, the straightforward adaptation requires solutions to subproblems
that are severely underconstrained. This in turn leads to an overly
conservative and not terribly useful candidate generator. There exist
mechanism-oriented solutions such as value enumeration, propagation of
variables, and slices; we review these and then demonstrate what domain
knowledge can be used to motivate appropriate uses of those techniques.
Beyond this use of domain knowledge within the current representation, there
are alternative perspectives on the problem which offer some promise of
alleviating the lack of constraint.
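As a rough illustration of the candidate-generation idea -- not the method of the talk; the two-inverter circuit, the component table, and the "suspend the suspect and check consistency" strategy below are invented for this sketch -- a component is generated as a candidate exactly when assuming it alone has failed removes the contradiction between the model's predictions and the observations:

    # Toy candidate generation for a two-inverter chain  a ->[inv1]-> b ->[inv2]-> c.
    COMPONENTS = {            # component name -> (input node, output node)
        "inv1": ("a", "b"),
        "inv2": ("b", "c"),
    }

    def propagate(observations, broken=None):
        """Forward-propagate known node values through every component except
        `broken`; return None if a prediction contradicts an observation."""
        values = dict(observations)
        changed = True
        while changed:
            changed = False
            for name, (src, dst) in COMPONENTS.items():
                if name == broken or src not in values:
                    continue
                predicted = 1 - values[src]          # behavior model of an inverter
                if dst not in values:
                    values[dst] = predicted
                    changed = True
                elif values[dst] != predicted:
                    return None                      # contradiction: model vs. observation
        return values

    def candidates(observations):
        """Components whose (single) failure could explain the misbehavior."""
        return [c for c in COMPONENTS if propagate(observations, broken=c) is not None]

    obs = {"a": 0, "c": 1}                           # a healthy chain would give c = 0
    assert propagate(obs) is None                    # the observations really are anomalous
    print(candidates(obs))                           # -> ['inv1', 'inv2']

Dependency-directed backtracking and local propagation of constraints achieve the same effect far more economically, by recording which components each prediction actually depends on rather than re-simulating with each suspect suspended.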
------------------------------
End of AIList Digest
********************
∂18-Dec-83 1526 LAWS@SRI-AI.ARPA AIList Digest V1 #114
Received: from SRI-AI by SU-AI with TCP/SMTP; 18 Dec 83 15:23:50 PST
Date: Sun 18 Dec 1983 11:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #114
To: AIList@SRI-AI
AIList Digest Sunday, 18 Dec 1983 Volume 1 : Issue 114
Today's Topics:
Intelligence - Confounding with Culture,
Jargon - Mental States,
Scientific Method - Research Methodology
----------------------------------------------------------------------
Date: 13 Dec 83 10:34:03-PST (Tue)
From: hplabs!hpda!fortune!amd70!dual!onyx!bob @ Ucb-Vax
Subject: Re: Intelligence = culture
Article-I.D.: onyx.112
I'm surprised that there have been no references to culture in
all of these "what is intelligence?" debates...
The simple fact of the matter is that "intelligence" means very
little outside of any specific cultural reference point. I am
not referring just to culturally-biased vs. non-culturally-biased
IQ tests, although that's a starting point.
Consider someone raised from infancy in the jungle (by monkeys,
for the sake of the argument). What signs of intelligence will
this person show? Don't expect them to invent fire or stone
axes; look how long it took us the first time around. The most
intelligent thing that person could do would be on par with what
we see chimpanzees doing in the wild today (e.g. using sticks to
get ants, etc).
What I'm driving at is that there are two kinds of "intelligence";
there is "common sense and ingenuity" (monkeys, dolphins, and a few
people), and there is "cultural methodology" (people only).
Cultural methodologies include all of those things that are
passed on to us as a "world-view": the notion of wearing clothes,
making fire, using arithmetic to figure out how many people X bags
of grain will feed, what spices to use when cooking, how to talk
(!). All of these things were at one time a brilliant conception
in someone's mind. And it didn't catch on
the first time around. Probably not the second or third time
either. But eventually someone convinced other people to try his
idea, and it became part of that culture. And using that as a
context gives other people an opportunity to bootstrap even
further. One small step for a man, a giant leap for his culture.
When we think about intelligence and get impressed by how wonderful
it is, we are looking at its application in a world stuffed to the
gills with prior context that is indispensable to everything we
think about.
What this leaves us with is people trying to define and measure a
hybrid of common sense and culture without noticing that what they
are interested in is actually two different things, plus the
interrelations between those things; no wonder the issue seems so
murky.
For those who may be interested, general systems theory, general
semantics, and epistemology are some fascinating related subjects.
Now let's see some letters about what "common sense" is in this
context, and about applying that common sense to (cultural)
contexts. (How recursive!)
------------------------------
Date: Tue, 13 Dec 83 11:24 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states
I am very intrigued by Fernando Pereira's last comment:
Sorry, you missed the point that JMC and then I were making. Prigogine's
work (which I know relatively well) has nothing to say about systems
which have to model in their internal states equivalence classes of
states of OTHER systems. It seems to me impossible to describe such
systems unless certain sets of states are labeled with things
like "believe(John,have(I,book))". That is, we start associating
classes of internal states to terms that include mentalistic
predicates.
I may be missing the point, since I am not sure what "model in their internal
states equivalence classes of states of OTHER systems" means. But I think
what you are saying is that `reasoning systems' that encode in their state
information about the states of other systems (or their own) are not
covered by Ilya Prigogine's work.
I think you are engaging in a leap of faith here. What is the basis
for believing that any sort of encoding of the state of other systems is
going on here? I don't think even the philosophical guard phrase
`equivalence class' protects you in this case.
To continue in my role of sceptic: if you claim that you are constructing
systems that model their internal state (or other systems' internal states)
[or even an equivalence class of those states], then I will claim that
my Linear Programming Model of a computer parts inventory is also
exhibiting `mental reasoning' since it is modeling the internal states
of that computer parts inventory.
This means that Prigogine's work is operative in the case of FSA-based
`reasoning systems', since they can do no more modeling of the internal
state of another system than a colloidal suspension, or an inventory
control system built by an operations research person.
- Steven Gutfreund
Gutfreund.umass@csnet-relay
------------------------------
Date: Wed 14 Dec 83 17:46:06-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Mental states of machines
The only reason I have to believe that a system encodes in its states
classifications of the states of other systems is that the systems we
are talking about are ARTIFICIAL, and therefore this is part of our
design. Of course, you are free to say that down at the bottom our
system is just a finite-state machine, but that's about as helpful as
making the same statement about the computer on which I am typing this
message when discussing how to change its time-sharing resource
allocation algorithm.
Besides this issue of convenience, it may well be the case that
certain predicates on the states of another system (or of the system
itself) are simply not representable within the system. One does not even
need to go as far as incompleteness results in logic: in a system which
has means to represent a single binary relation (say, the immediate
accessibility relation for a maze), no logical combination can
represent the transitive closure (accessibility relation) [example due
to Bob Moore]. Yet the transitive closure is causally connected to the
initial relation in the sense that any change in the latter will lead
to a change in the former. It may well be the case (SPECULATION
WARNING!) that some of the "mental state" predicates have this
character, that is, they cannot be represented as predicates over
lower-level notions such as states.
-- Fernando Pereira
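To make the maze example concrete, here is a minimal sketch (the room names and edges are invented) that computes the accessibility relation as the least fixed point of chaining immediate accessibility with itself. The closure is reached only by unbounded iteration -- no fixed logical combination of the base relation yields it -- yet, as the last two lines show, editing the base relation shows up immediately in the closure.

    # Immediate accessibility for a small maze (edges invented for illustration).
    adjacent = {("r1", "r2"), ("r2", "r3"), ("r3", "r4")}

    def accessibility(edges):
        """Transitive closure: the smallest relation containing `edges` and
        closed under chaining (x,y),(y,z) -> (x,z)."""
        closure = set(edges)
        changed = True
        while changed:
            changed = False
            for (x, y) in list(closure):
                for (y2, z) in list(closure):
                    if y == y2 and (x, z) not in closure:
                        closure.add((x, z))
                        changed = True
        return closure

    print(("r1", "r4") in accessibility(adjacent))                      # True
    print(("r1", "r4") in accessibility(adjacent - {("r2", "r3")}))     # False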
------------------------------
Date: 12 Dec 83 7:20:10-PST (Mon)
From: hplabs!hao!seismo!philabs!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Mental states of machines
Article-I.D.: dciem.548
Any discussion of the nature and value of mental states in either
humans or machines should include consideration of the ideas of
J. G. Taylor (no relation). In his "Behavioral Basis of Perception"
(Yale University Press, 1962), he sets out mathematically a basis
for changes in perception/behaviour dependent on transitions into
different members of "sets" of states. These "sets" look very like
the mental states referenced in the earlier discussion, and may
be tractable in studies of machine behaviour. They also tie in
quite closely with the recent loose talk about "catastrophes" in
psychology, although they are much better specified than the analogists'
models. The book is not easy reading, but it is very worthwhile, and
I think the ideas still have a lot to offer, even after 20 years.
Incidentally, in view of the mathematical nature of the book, it
is interesting that Taylor was a clinical psychologist interested
initially in behaviour modification.
Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt
------------------------------
Date: 14 Dec 1983 1042-PST
From: HALL.UCI-20B@Rand-Relay
Subject: AI Methods
After listening in on the communications concerning definitions
of intelligence, AI methods, AI results, AI jargon, etc., I'd
like to suggest an alternate perspective on these issues. Rather
than quibbling over how AI "should be done," why not take a close
look at how things have been and are being done? This is more of
a social-historical viewpoint, admitting the possibility that
adherents of differing methodological orientations might well
"talk past each other" - hence the energetic argumentation over
issues of definition. In this spirit, I'd like to submit the
following for interested AILIST readers:
Toward a Taxonomy of Methodological
Perspectives in Artificial Intelligence Research
Rogers P. Hall
Dennis F. Kibler
TR 108
September 1983
Department of Information and Computer Science
University of California, Irvine
Irvine, CA 92717
Abstract
This paper is an attempt to explain the apparent confusion of
efforts in the field of artificial intelligence (AI) research in
terms of differences between underlying methodological perspectives
held by practicing researchers. A review of such perspectives
discussed in the existing literature will be presented, followed by
consideration of what a relatively specific and usable taxonomy of
differing research perspectives in AI might include. An argument
will be developed that researchers should make their methodological
orientations explicit when communicating research results, both as
an aid to comprehensibility for other practicing researchers and as
a step toward providing a coherent intellectual structure which can
be more easily assimilated by newcomers to the field.
The full report is available from UCI for a postage fee of $1.30.
Electronic communications are welcome:
HALL@UCI-20B
KIBLER@UCI-20B
------------------------------
Date: 15 Dec 1983 9:02-PST
From: fc%usc-cse%USC-ECL@MARYLAND
Subject: Re: AIList Digest V1 #112 - science
In my mind, science has always been the practice of using the
'scientific method' to learn. In any discipline, this is used to some
extent, but in a pure science it is used in its purest form. This
method seems to be founded in the following principles:
1 The observation of the world through experiments.
2 Attempted explanations in terms of testable hypotheses - they
must explain all known data, predict as yet unobserved results,
and be falsifiable.
3 The design and use of experiments to test predictions made by these
hypotheses in an attempt to falsify them.
4 The abandonment of falsified hypotheses and their replacement
with more accurate ones - GOTO 2.
Experimental psychology is indeed a science if viewed from this
perspective. So long as hypotheses are made and predictions tested with
some sort of experiment, the crudity of the statistics is comparable to
that of the statistical models physics used before it advanced to its
current state. Computer science (or whatever you call it) is also a
science in the sense that our understanding of computers is based on
prediction and experimentation. Anyone that says you don't experiment
with a computer hasn't tried it.
The big question is whether mathematics is a science. I guess
it is, but somehow any system in which you only falsify or verify based
on the assumptions you made leaves me a bit concerned. Of course we are
context bound in any other science, and can't often see the forest for
the trees, but on the other hand, accidental discovery based on
experiments with results which are unpredictable under the current theory
is not really possible in a purely mathematical system.
History is probably not a science in the above sense because,
although there are hypotheses with possible falsification, there is
little chance of performing an experiment in the past. Archeological
findings may be thought of as an experiment of the past, but I think
this sort of experiment is of quite a different nature than those that
are performed in other areas I call science. Archeology by the way is
probably a science in the sense of my definition not because of the
ability to test hypotheses about the past through experimental
diggings, but because of its constant development and experimental
testing of theory in regards to the way nature changes things over time.
The ability to determine the type of wood burned in an ancient fire and
the year in which it was burned is based on the scientific process that
archeologists use.
Fred
------------------------------
Date: 13 Dec 83 15:13:26-PST (Tue)
From: hplabs!hao!seismo!philabs!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Information sciences vs. physical sciences
Article-I.D.: dciem.553
*** This response is routed to net.philosophy as well as the net.ai
where it came from. Responders might prefer to edit net.ai out of
the Newsgroups: line before posting.
I am responding to an article claiming that psychology and computer
science aren't sciences. I think that the author is seriously confused
by his preferred usage of the term ``science''.
I'm not sure, but I think the article referenced was mine. In any case,
it seems reasonable to clarify what I mean by "science", since I think
it is a reasonably common meaning. By the way, I do agree with most of
the article that started with this comment, that it is futile to
define words like "science" in a hard and fast fashion. All I want
here is to show where my original comment comes from.
"Science" has obviously a wide variety of meanings if you get too
careful about it, just as does almost any word in a natural language.
But most meanings of science carry some flavour of a method for
discovering something that was not previously known, by means that
others can repeat. It doesn't really matter whether that method is empirical,
theoretical, experimental, hypothetico-deductive, or whatever, provided
that the result was previously uncertain or not obvious, and that at
least some other people can reproduce it.
I argued that psychology wasn't a science mainly on the grounds that
it is very difficult, if not impossible, to reproduce the conditions
of an experiment on most topics that qualify as the central core of
what most people think of as psychology. Only the grossest aspects
can be reproduced, and only the grossest characterization of the
results can be stated in a way that others can verify. Neither do
theoretical approaches to psychology provide good prediction of
observable behaviour, except on a gross scale. For this reason, I
claimed that psychology was not a science.
Please note that in saying this, I intend in no way to downgrade the
work of practicing psychologists who are scientists. Peripheral
aspects and gross descriptions are susceptible to attack by our
present methods, and I have been using those methods for 25 years
professionally. In a way it is science, but in another way it isn't
psychology. The professional use of the word "psychology" is not that
of general English. If you like to think what you do is science,
that's fine, but remember that the definition IS fuzzy. What matters
more is that you contribute to the world's well-being, rather than
what you call the way you do it.
--
Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt
------------------------------
Date: 14 Dec 83 20:01:52-PST (Wed)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: Information sciences vs. physical sc - (nf)
Article-I.D.: fortune.1978
I have to throw my two bits in:
The essence of science is "prediction". The missing step in the classic
hypothesis-experiment-analysis paradigm presented above is that
"hypothesis" should be read as "theory plus prediction".
That is, no matter how well the hypothesis explains the current data, it
can only be tested on data that has NOT YET BEEN TAKEN.
Any sufficiently overdetermined model can account for any given set of data
by tweaking the parameters. The trick is, once calculated, do those parameters
then predict as yet unmeasured data, WITHOUT CHANGING the parameters?
("Predict" means "within an reasonable/acceptable confidence interval
when tested with the appropriate statistical methods".)
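A minimal numerical sketch of this fit-then-predict test (the data, the linear "phenomenon", and the two polynomial models are invented for illustration): each model's parameters are tweaked once on the data already in hand, and the model is then judged, parameters unchanged, on data taken afterwards.

    import numpy as np

    rng = np.random.default_rng(1)

    def phenomenon(x):
        return 2.0 * x + 1.0                             # the underlying regularity

    x_old = np.linspace(0.0, 2.0, 10)                    # data in hand when the theory was framed
    y_old = phenomenon(x_old) + 0.1 * rng.standard_normal(x_old.size)
    x_new = np.linspace(2.2, 3.0, 5)                     # data NOT YET TAKEN at fitting time
    y_new = phenomenon(x_new) + 0.1 * rng.standard_normal(x_new.size)

    for degree in (1, 7):                                # modest model vs. heavily parameterized one
        p = np.polyfit(x_old, y_old, degree)             # parameters tweaked here, once
        fit_err = np.mean((np.polyval(p, x_old) - y_old) ** 2)
        new_err = np.mean((np.polyval(p, x_new) - y_new) ** 2)
        print(degree, round(fit_err, 5), round(new_err, 5))

The degree-7 model accounts for the old data at least as well as the straight line, but its prediction error on the new data is typically far worse: the extra parameters were tuned to the data, not to the phenomenon.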
Why am I throwing this back into "ai"? Because (for me) the true test
of whether "ai" has/will become a "science" is when it's theories/hypotheses
can successfully predict (c.f. above) the behaviour of existing "natural"
intelligences (whatever you mean by that, man/horse/porpoise/ant/...).
------------------------------
End of AIList Digest
********************
∂19-Dec-83 0912 KJB@SRI-AI.ARPA reminder
Received: from SRI-AI by SU-AI with TCP/SMTP; 19 Dec 83 09:12:49 PST
Date: Mon 19 Dec 83 09:09:15-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: reminder
To: csli-folks@SRI-AI.ARPA
Don't forget to send me a paragraph or two describing the activities
of any committee or project you are responsible for. So far only
the Postdoc and Visitor Committee and the Colloquium Committee have
responded. On the other hand, I notice that a couple of people have
left town for the vacation without giving me anything, which will leave
their activities unrepresented, unless they have delegated it to someone
else without telling me.
By the way, the Newsletter has been going to, among other places, SDF.
Those projects that have not made much of an appearance in the newsletter
might want to take this opportunity to make it clear that things have been
going on in those projects.
Deadline: 8 a.m. Dec 26
Thanks, Jon
-------
∂19-Dec-83 1124 EMMA@SRI-AI.ARPA directory
Received: from SRI-AI by SU-AI with TCP/SMTP; 19 Dec 83 11:24:31 PST
Date: Mon 19 Dec 83 11:27:12-PST
From: Emma Pease <EMMA@SRI-AI.ARPA>
Subject: directory
To: almog@SRI-AI.ARPA, appelt@SRI-AI.ARPA, bach-hong@SRI-AI.ARPA,
bmacken@SRI-AI.ARPA, bresnan@SRI-AI.ARPA, chappell@SRI-AI.ARPA,
eric@SRI-AI.ARPA, gardenfors@SRI-AI.ARPA, hans@SRI-AI.ARPA,
hobbs@SRI-AI.ARPA, igoni@SRI-AI.ARPA, jmc-lists@SU-AI.ARPA,
kay@PARC-MAXC.ARPA, kells@SRI-AI.ARPA, konolige@SRI-AI.ARPA,
lauri@SRI-AI.ARPA, pcohen@SRI-KL.ARPA, pkanerva@SUMEX-AIM.ARPA,
pollard%hp-hulk.hp-labs@RAND-RELAY.ARPA, sgf@SU-AI.ARPA,
shieber@SRI-AI.ARPA
cc: emma@SRI-AI.ARPA
I still have not received your replies for the directory; please send
them as soon as possible so the directory can go out.
Emma
Dear CSLI-FOLKS:
We are assembling a CSLI-FOLKS DIRECTORY, which will include work
title, address and phone; ARPANet address; home address and phone
(optional).
We realize that we already have some information but wish to ensure
the accuracy of the directory by double checking, so please complete
all the non-optional entries on the form.
We hope this directory will be useful to you in communicating with
other CSLI folks, including those not on the NET. We appreciate
your response so that our directory can be as complete as possible.
As soon as the input is finished, copies will be available in the
lobby at Ventura. For questions, contact (Emma@sri-ai) or Emma Pease
at (415) 497-0939. Thanks for your cooperation.
1) NAME: 2) NICKNAME(optional):
3) NET ADD: 4) ALT NET ADD:
5) TITLE: 6) ALT TITLE:
7) WORK ADD: 8) ALT WORK ADD:
9) WORK PH: 10) ALT WORK PH:
12) HOME ADD(optional):
13) HOME PH(optional):
-------
∂19-Dec-83 1736 BMOORE@SRI-AI.ARPA soliciting postdoc applications
Received: from SRI-AI by SU-AI with TCP/SMTP; 19 Dec 83 17:36:31 PST
Date: Mon 19 Dec 83 17:36:08-PST
From: Bob Moore <BMOORE@SRI-AI.ARPA>
Subject: soliciting postdoc applications
To: csli-folks@SRI-AI.ARPA
I'd like to get your help in the process of recruiting postdocs. We
have a very nice poster announcing the availability of postdocs for
next year, and we have sent most of them out to U.S. colleges and
universities. We have between 100 and 150 left to send to foreign
schools. This is far too few to get the kind of widespread publicity
we might like to have, so I would greatly appreciate it if everyone
(perhaps in small groups) would send to me and Emma a list of places
outside the U.S. in your respective fields that we should not fail to
cover. It is vital that we get the posters out as quickly as
possible, so please respond right away. The members of the postdoc
committee are ESPECIALLY urged to take this seriously.
Thanks,
Bob
-------
∂20-Dec-83 1349 GROSZ@SRI-AI.ARPA Visitor: David Israel, BBN
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Dec 83 13:49:06 PST
Date: Tue 20 Dec 83 13:49:32-PST
From: Barbara J. Grosz <GROSZ@SRI-AI.ARPA>
Subject: Visitor: David Israel, BBN
To: dkanerVA@SRI-AI.ARPA, riggs@SRI-AI.ARPA
cc: csli-folks@SRI-AI.ARPA
Diane--
Would you please put a notice in the next Newsletter saying that David
Israel (from BBN) will be visiting the week of January 16-20. Anyone
who wants a chance to meet with him should let Sandy know--preferably
by netmail (riggs@sri-ai)--she'll arrange a schedule later. No
formal presentations are planned.
thanks
Barbara
-------
∂20-Dec-83 1739 BMOORE@SRI-AI.ARPA long term visitors
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Dec 83 17:39:44 PST
Date: Tue 20 Dec 83 17:40:03-PST
From: Bob Moore <BMOORE@SRI-AI.ARPA>
Subject: long term visitors
To: csli-folks@SRI-AI.ARPA
You may have noticed that the postdoc committee has in fact been
designated the "postdoc and long term visitors" committee. However,
after discussions with Jon, it has been decided that the committee
should play only a minor role with respect to the latter group. The
reason is that funding long term visitors is inextricably linked to
the budgetary constraints of the different areas, so it does not seem
to make sense for this committee to try to make decisions on visitors
that the people within each area will undoubtedly want to make for
themselves. Decisions on visitors, then, will be made at the area
level, so requests should be directed to the appropriate area and
project managers. The postdoc and visitors committee will simply act
as a clearing house, directing unsolicited inquiries to the relevant
area managers.
--Bob
-------
∂21-Dec-83 0613 LAWS@SRI-AI.ARPA AIList Digest V1 #115
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Dec 83 06:12:38 PST
Date: Tue 20 Dec 1983 21:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #115
To: AIList@SRI-AI
AIList Digest Wednesday, 21 Dec 1983 Volume 1 : Issue 115
Today's Topics:
Neurophysics - Left/Right-Brain Citation Request,
Knowledge Representation,
Science & Computer Science & Expert Systems,
Science - Definition,
AI Funding - New Generation Computing
----------------------------------------------------------------------
Date: 16 Dec 83 13:10:45-PST (Fri)
From: decvax!microsoft!uw-beaver!ubc-visi!majka @ Ucb-Vax
Subject: Left / Right Brain
Article-I.D.: ubc-visi.571
From: Marc Majka <majka@ubc-vision.UUCP>
I have heard endless talk, and read endless numbers of magazine-grade
articles about left / right brain theories. However, I have not seen a
single reference to any scientific evidence for these theories. In fact,
the only reasonably scientific discussion I heard stated quite the opposite
conclusion about the brain: That although it is clear that different parts
of the brain are associated with specific functions, there is no logical
(analytic, mathematical, deductive, sequential) / emotional (synthetic,
intuitive, inductive, parallel) pattern in the hemispheres of the brain.
Does anyone on the net have any references to any studies that have been
done concerning this issue? I would appreciate any directions you could
provide. Perhaps, to save the load on this newsgroup (since this is not an
AI question), it would be best to mail directly to me. I would be happy to
post a summary to this group.
Marc Majka - UBC Laboratory for Computational Vision
------------------------------
Date: 15 Dec 83 20:12:46-PST (Thu)
From: decvax!wivax!linus!utzoo!watmath!watdaisy!rggoebel @ Ucb-Vax
Subject: Re: New Topic (technical) - (nf)
Article-I.D.: watdaisy.362
Bob Kowalski has said that the only way to represent knowledge is
using first order logic. ACM SIGART Newsletter No. 70, February 1980
surveys many of the people in the world actually doing representation
research, and few of them agree with Kowalski. Is there anyone out
there who can substantiate a claim for actually ``representing'' (whatever
that means) ``knowledge''? Most of the knowledge representation
schemes I've seen are really deductive information description languages
with quasi-formal extensions. I don't have a good definition of what
knowledge is...but ask any mathematical logician (or mathematical
philosopher) what they think about calling something like KRL a
knowledge representation language.
Randy Goebel
Logic Programming Group
University of Waterloo
Waterloo, Ontario, CANADA N2L 3G1
------------------------------
Date: 13 Dec 83 8:14:51-PST (Tue)
From: hplabs!hao!seismo!philabs!linus!security!genrad!wjh12!foxvax1!brunix!jah @ Ucb-Vax
Subject: Re: RE: Expert Systems
Article-I.D.: brunix.5992
I don't understand what the "size" of a program has to do with anything.
The notion that size is important seems to support the idea that the
word "science" in "computer science" belongs in quote marks. That is,
that CS is just a bunch of hacks anyhow.
The theory folks, whom I think most of us would call computer scientists,
write almost no programs. Yet, I'd say their contribution to CS is
quite important (who analyzed the sorting algorithm you used this morning?)
At least some parts of AI are still Science (with a capital "S"). We are
exploring issues involving cognition and memory, as well as building the
various programs that we call "expert systems" and the like. Pople's group,
for example, is examining how it is that expert doctors come to make
diagnoses. He is interested in the computer application, but also in the
understanding of the underlying process.
Now, while we're flaming, let me also mention that some AI programs have
been awfully large. If you are into the "bigger is better" mentality, I
suggest a visit to Yale and a view of some of the language programs there.
How about FRUMP, which in its 1978 version took up three processes, each
using over 100K of memory; its source code was several hundred pages, and
it contained word definitions for over 10,000 words. A little bigger
than Haunt?
Pardon all this verbiage, but I think AI has shown itself both on
the scientific level, by contributions to the field of psychology
(and linguistics, for that matter) and to the state of the art in
computer technology, and also on the engineering level, by
designing and building some very large programs and some new
programming techniques and tools.
-Jim Hendler
------------------------------
Date: 19 Dec 1983 15:00-EST
From: Robert.Frederking@CMU-CS-CAD.ARPA
Subject: Re: Math as science
Actually, my library's encyclopedia says that mathematics isn't
a science, since it doesn't study phenomena, but rather is "the
language of science". Perhaps part of the fuzziness about
AI-as-science is that we are creating most of the phenomena we are
studying, and the more theoretical components of what we are doing look
a lot like mathematical logic, which isn't a science.
------------------------------
Date: Mon, 19 Dec 1983 10:21:47 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications Mgr.)
Subject: Defining "Science"
For better or worse, there really isn't such a thing as a prototypical
science. The meaning of the word 'science' has always been different in
different realms of discourse: what the "average American" means by the term
differs from what a physicist means, and neither of them would agree with an
individual working in one of the 'softer' fields.
This is not something we want to change, in my view. The belief that
there must be one single standardized definition for a very general term is
not a useful one, especially when the term is one that does not describe an
explicit, material thing (e.g., blood, pencil, etc.). Abstract terms are
always dependent on the social context of their use for their definition; it's
just that academics often forget (or fail to note) that contexts other than
their own fields exist.
Even if we try and define science in terms of its usage of the "scientific
method," we find that there's no clear definition. If you've yet to read it,
I strongly urge you to take a look at Kuhn's "The Structure of Scientific
Revolutions," which is one of the most important books written about science.
He looks at what the term has meant, and does mean, in various disciplines
at various periods, and examines very carefully how the definitions were, in
reality, tied to other socially-defined notions. It's a seminal work in the
study of the history and sociology of science.
The social connotations of words like science affect us all every day.
In my personal opinion, one of the major reasons why the term 'computer
science' is gaining popularity within academia is that it dissociates the
field from engineering. The latter field has, at least within most Western
cultures, a social stigma of second-class status attached to it, precisely
because it deals with mundane reality (the same split, of course, comes up
twixt pure and applied mathematics). A good book on this, by the way, is
Samuel Florman's "The Existential Pleasures of Engineering"; his more recent
volume, "Blaming Technology", is also worth your time.
--Dave Axler
------------------------------
Date: Fri 16 Dec 83 17:32:56-PST
From: Al Davis <ADavis at SRI-KL>
Subject: Re: AIList Digest V1 #113
In response to the general "gee, the Japanese are good guys and the
Americans are schmucks and warmongers" view, and as a member of
one of the planning groups that wrote the DARPA SC plan, I offer the
following questions for thought:
1. If you were Bob Kahn and were trying to get funding to permit
continued growth of technology under the Reagan administration, would
you ask for $750 million and say that you would do things in such a
way as to prevent military use?
2. If it were not for DARPA how would we be reading and writing all
this trivia on the ARPAnet?
3. If it were not for DARPA, how many years of our work (hopefully fun,
productive, and challenging) would have been fundamentally different?
4. Is it possible that the Japanese mean "Japanese society" when they
target programs for "the good of ?? society"?
5. Is it really possible to develop advanced computing technology that
cannot be applied to military problems? Can lessons of
destabilization of the US economy be learned from the automobile,
steel, and TV industries?
6. It is obvious that the Japanese are quick to take, copy, etc. in
terms of technology and profit. Have they given much back? Note: I like
my Sony TV and Walkman as much as anybody does.
7. If DARPA is evil then why don't we all move to Austin and join MCC
and promote good things like large corporate profit?
8. Where would AI be if DARPA had not funded it?
Well the list could go on, but the direction of this diatribe is
clear. I think that many of us (me too) are quick to criticize and
slow to look past the end of our noses. One way to start to improve
society is to climb down off the &%↑$&↑ ivory tower ourselves. I for
one have no great desire to live in Japan.
Al Davis
ADAVIS @ SRI-KL
------------------------------
Date: Tue, 20 Dec 1983 09:13 EST
From: HEWITT%MIT-OZ@MIT-MC.ARPA
Subject: New Generation computing: Japanese and U.S. motivations
Ron,
I believe that you have painted a misleading picture of a complex situation.
From talking to participants involved, I believe that MITI is
funding the Japanese Fifth Generation Project primarily for commercial
competitive advantage. In particular they hope to compete with IBM
more effectively than as plug-compatible manufacturers. MITI also
hopes to increase Japanese intellectual prestige. Congress is funding
Strategic Computing to maintain and strengthen US military and
commercial technology. A primary motivation for strengthening the
commercial technology is to meet the Japanese challenge.
------------------------------
Date: 20 Dec 83 20:41:06 PST (Tuesday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Re: New Generation computing: Japanese and U.S. motivations
Are we really in disagreement?
It seems pretty clear from my quotes, and from numerous writings on the
subject, that the Japanese intend to use the Fifth Generation Project to
strengthen their position in commercial markets. We don't disagree
there.
It also seems clear that, as you say, "Congress is funding a project
called Strategic Computing to maintain and strengthen US military and
commercial technology." That should be parsed as "Military technology
first, with hopes of commercial spinoff."
If you think that's a misleading distortion, read the DARPA Strategic
Computing Report. Pages 21 through 29 contain detailed specifications
of the requirements of three specific military applications. There is
no equivalent specification of non-military application
requirements--only a vague statement on page 9 that commercial spinoffs
will occur. Military requirements and terminology permeate the entire
report.
If the U.S. program is aimed at military applications, that's what it
will produce. Any commercial or industrial spinoff will be incidental.
If we are serious about strengthening commercial computer technology,
then that's what we should be aiming for. As you say, that's certainly
what the Japanese are aiming for.
Isn't it about time that we put our economic interests first, and the
military second?
/Ron
------------------------------
End of AIList Digest
********************
∂21-Dec-83 1017 BMOORE@SRI-AI.ARPA Re: soliciting postdoc applications
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Dec 83 10:17:26 PST
Date: Wed 21 Dec 83 10:15:28-PST
From: Bob Moore <BMOORE@SRI-AI.ARPA>
Subject: Re: soliciting postdoc applications
To: csli-folks@SRI-AI.ARPA
cc: BMOORE@SRI-AI.ARPA
In-Reply-To: Message from "Bob Moore <BMOORE@SRI-AI.ARPA>" of Mon 19 Dec 83 17:36:13-PST
We've gotten a number of responses to the request for lists of foreign
colleges and universities to send the postdoc poster to. If you
haven't responded yet, please do. The suggestions we have so far are
in <CSLI>FOREIGN.LIST, so you might look at that to see what is
missing. Also, please include a specific person, department, or
laboratory, so the poster will get to the right place.
--Bob
-------
∂22-Dec-83 2213 LAWS@SRI-AI.ARPA AIList Digest V1 #116
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Dec 83 22:12:58 PST
Date: Thu 22 Dec 1983 19:37-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #116
To: AIList@SRI-AI
AIList Digest Friday, 23 Dec 1983 Volume 1 : Issue 116
Today's Topics:
Optics - Request for Camera Design,
Neurophysiology - Split Brain Research,
Expert Systems - System Size,
AI Funding - New Generation Computing,
Science - Definition
----------------------------------------------------------------------
Date: Wed, 21 Dec 83 14:43:29 PST
From: Philip Kahn <v.kahn@UCLA-LOCUS>
Subject: REFERENCES FOR SPECIALIZED CAMERA DESIGN USING FIBER OPTICS
In a conventional TV camera, the image falls upon a staring
array of transducers. The problem is that it is very difficult to
get very close to the focal point of the optical system using this
technology.
I am looking for a design of a camera imaging system
that projects the light image onto a fiber optic bundle.
The optical fibers are used to transport the light falling upon
each pixel away from the camera focal point so that the light
may be quantized.
I'm sure that such a system has already been designed, and
I would greatly appreciate any references that would be appropriate
to this type of application. I need to computer model such a system,
so the pertinent optical physics and related information would be
MOST useful.
If there are any of you that might be interested in this
type of camera system, please contact me. It promises to provide
the degree of resolution which is a constraint in many vision
computations.
Visually yours,
Philip Kahn
------------------------------
Date: Wed 21 Dec 83 11:38:36-PST
From: Richard F. Lyon <DLyon at SRI-KL>
Subject: Re: AIList Digest V1 #115
In reply to <majka@ubc-vision.UUCP> on left/right brain research:
Most of the work on split brains and hemispheric specialization
has been done at Caltech by Dr. Roger Sperry and colleagues. The 1983
Caltech Biology annual report has 5 pages of summary results, and 11
recent references by Sperry's group. Previous year annual reports
have similar amounts. I will mail copies if given an address.
Dick Lyon
DLYON@SRI-KL
------------------------------
Date: Wednesday, 21 December 1983 13:48:54 EST
From: John.Laird@CMU-CS-H
Subject: Haunt and other production systems.
A few facts on production systems.
1. Haunt consists of 1500 productions and requires 160K words of memory on a
KL10. (So FRUMP is a bit bigger than Haunt.)
2. Expert systems (R1, XSEL and PTRANS) written in a similar language
consist of around 1500-2500 productions.
3. An expert system to perform VLSI design (DAA) consists of around 200
productions.
------------------------------
Date: 19 Dec 83 17:37:56-PST (Mon)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: Re: Humanistic Japanese vs. Military Americans
Article-I.D.: dartvax.536
Does anyone know of any groups doing serious AI in the U.S. or Europe
that emulate the Japanese attitude?
--Lorien
------------------------------
Date: Wed 21 Dec 83 13:04:21-PST
From: Andy Freeman <ANDY@SU-SCORE.ARPA>
Subject: Re: AIList Digest V1 #115
"If the U.S. program is aimed at military applications, that's what it
will produce. Any commercial or industrial spinoff will be
incidental."
It doesn't matter what DoD and the Japanese project aim for. We're
not talking about spending a million on designing bullets but
something much more like the space program. The meat of that
specification was "American on Moon with TV camera" but look what else
happened. Also, the goal was very low volume, but many of the
products aren't.
Hardware, which is probably the majority of the specification, could
be where the crossover will be greatest. Even if they fail to get "a
lisp machine in every tank", they'll succeed in making one for an
emergency room. (Camping gear is a recent example of something
similar.) Yes, they'll be able to target software applications, but
at least the tools, skills, and people move. What distinguishes a US
Army database system anyway?
I can understand the objection that the DoD shouldn't have "all those
cycles", but that isn't one of the choices. (How they are to be used
is, but not through the research.) The new machines are going to be
built - if nothing else the DoD can use Japanese ones. Even if all
other things were equal (I don't think the economic ones are), why
should they have all the fun?
-andy
------------------------------
Date: Wednesday, 21 December 1983, 19:27-EST
From: Hewitt at MIT-MC
Subject: New Generation Computing: Japanese and U.S. motivations
Ron,
For better or worse, I do not believe that you can determine what will
be the motivations or structure of either the MITI Fifth Generation
effort or the DARPA Strategic Computing effort by citing chapter and
verse from the two reports which you have quoted.
/Carl
------------------------------
Date: Wed, 21 Dec 83 22:55:04 EST
From: BRINT <abc@brl-bmd>
Subject: AI Funding - New Generation Computing
It seems to me that intelligent folks like AIList readers
should realize that the only reason Japan can fund peaceful
and humanitarian research to the exclusion of
military projects is that the United States provides the
military protection and security guarantees (out of our own
pockets) that make this sort of thing possible.
(I believe Al Davis said it well in the last Digest.)
------------------------------
Date: 22 Dec 83 13:52:20 EST
From: STEINBERG@RUTGERS.ARPA
Subject: Strategic Computing: Defense vs Commerce
Yes, it is a sad fact about American society that a project like
Strategic Computing will only be funded if it is presented as a
defense issue rather than a commercial/economic one. (How many people
remember that the original name for the Interstate Highway system had
the word "Defense" in it?) This is something we can and
should work to change, but I do not believe that it is the kind of
thing that can be changed in a year or two. So, we are faced with the
choice of waiting until we change society, or getting the AI work done
in a way that is not perfectly optimal for producing
commercial/economic results.
It should be noted that achieving the military goals will require very
large advances in the underlying technology that will certainly have
very large effects on non-military AI. It is not just a vague hope
for a few spinoffs. So while doing it the DOD way may not be optimal
it is not horrendously sub-optimal.
There is, of course, a moral issue of whether we want the military to
have the kinds of capabilities implied by the Strategic Computing
plan. However, if the answer is no then you cannot do the work under
any funding source. If the basic technology is achieved in any way,
then the military will manage to use it for their purposes.
------------------------------
Date: 18 Dec 83 19:47:50-PST (Sun)
From: pur-ee!uiucdcs!parsec!ctvax!uokvax!andree @ Ucb-Vax
Subject: Re: Information sciences vs. physical sc - (nf)
Article-I.D.: uiucdcs.4598
The definitions of Science that were offered, in defense of
"computer Science" being a science, were just irrelevant.
A field can lay claim to Science, if it uses the "scientific method"
to make advances, that is:
Hypotheses are proposed.
Hypotheses are tested by objective experiments.
The experiments are objectively evaluated to prove or
disprove the hypotheses.
The experiments are repeatable by other people in other places.
- Keremath, care of:
Robison
decvax!ittvax!eosp1
or: allegra!eosp1
I have to disagree. Your definition of `science' excludes at least one
thing that almost certainly IS a science: astronomy. The major problem
here is that most astronomers (all extra-solar astronomers) just cannot
do experiments, which is why they call it `observational astronomy.'
I would guess what is needed is three (at least) flavors of science:
1) experimental sciences: physics, chemistry, biology, psychology.
Any field that uses the `scientific method.'
2) observational sciences: astronomy, sociology, etc. Any field that,
for some reason or another, must be satisfied with observing
phenomena, and cannot perform experiments.
3) ? sciences: mathematics, some cs, probably others. Any field that
explores the universe of the possible, as opposed to the universe of
the actual.
What should the ? be? I don't know. I would tend to favor `logical,' but
something tells me a lot of people will object.
<mike
------------------------------
Date: 21 Dec 1983 14:36-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest V1 #115
The reference to Kuhn's 'The Structure of Scientific Revolutions'
is appreciated, but if you take a good look at the book itself, you
will find it severely lacking in scientific practice. Besides being
palpably inconsistent, Kuhn's book claims several facts about history
that are not correct, and uses them in support of his arguments. One of
his major arguments is that historians rewrite the facts, and he acted
in just this manner, rewriting facts to support his contentions. He defined
the term 'paradigm' inconsistently, and even though it is in common use
today, it has not been consistently defined yet. He also made several
other inconsistent definitions, and has even given up this view of
science (if you bother to read the other papers written after his book).
It just goes to show you, you shouldn't believe everything you read,
Fred
------------------------------
End of AIList Digest
********************
∂27-Dec-83 1354 ELYSE@SU-SCORE.ARPA Christensen Fellowships for Senior Faculty at St. Catherine's
Received: from SU-SCORE by SU-AI with TCP/SMTP; 27 Dec 83 13:53:53 PST
Date: Tue 27 Dec 83 13:52:53-PST
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Christensen Fellowships for Senior Faculty at St. Catherine's
To: faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
I ENCLOSE A MEMO WHICH I RECEIVED. I WAS A FELLOW AT ST. CATH'S AND THE FOOD WAS TERRIFIC.
GENE
From: Office of the Vice Provost and Dean of Graduate Studies and Research
Subject: Christensen Fellowships for Senior Faculty at St. Catherine's College, Oxford University
St. Catherine's College, Oxford University, has established a program for visiting fellowships open to distinguished scholars of international reputation.
The Fellows may visit Oxford for periods of not less than a term (two months)
and not more than a year.
In the academic year October 1984-July 1985, St. Catherine's has available two such fellowships. One is restricted to applicants in Engineering and other
applied sciences, including the medical sciences, and the other is open to applicants in any field.
The deadline for receipt of applications for the 1984-1985 Fellowships is Feb. 1, 1984. Applications should include a brief account of the work that is proposed to be pursued at Oxford, and the names of two referees to whom the College can write.
Faculty who may be interested in applying should write directly to: The Master, St. Catherine's College, Oxford OX1 2UJ, United Kingdom.
The Fellowships have been established with a benefaction from Mr. Allen
Christensen (after whom they are named) and provide for two or three visitors
each year. Without limiting its freedom of choice, St. Catherine's College
will give a degree of preference to applicants from Stanford University, so
please bring this to the attention of your faculty. This past term, Professor
Peter Stansky and Professor Gene Golub were visiting Fellows of St. Catherine's.
The Fellowships do not carry a stipend but provide membership of St. Catherine's Senior Common Room; the right of common table (i.e. to take meals free in the
College) and furnished accommodation, rent free, in newly converted apartments
in a separate house. Travel expenses are not covered.
Please let this office know if you have any successful candidates.
-------
∂27-Dec-83 1822 GOLUB@SU-SCORE.ARPA Faculty meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 27 Dec 83 18:22:04 PST
Date: Tue 27 Dec 83 18:21:27-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Faculty meeting
To: faculty@SU-SCORE.ARPA
The first faculty meeting of the year will be on Tuesday, Jan 10 at
2:30. Let me know if you have any agenda items.
GENE
-------
∂27-Dec-83 1825 GOLUB@SU-SCORE.ARPA Senior Faculty Meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 27 Dec 83 18:25:01 PST
Date: Tue 27 Dec 83 18:25:05-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Senior Faculty Meeting
To: CSD-Senior-Faculty: ;
I would like to schedule a senior faculty meeting on Tuesday, Jan 17
at 2:30. Let me know if that is convenient for you.
GENE
-------
∂28-Dec-83 1054 EMMA@SRI-AI.ARPA recycling
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Dec 83 10:54:43 PST
Date: Wed 28 Dec 83 10:48:20-PST
From: Emma Pease <EMMA@SRI-AI.ARPA>
Subject: recycling
To: csli-folks@SRI-AI.ARPA
We now have an aluminum recycling bin by the coke machine; it is not
to be used as a garbage can for candy wrappers nor as a bin for tin
cans. Please use it.
Thank you for using the paper recycling bin, and please remember it is
in the printing room.
Emma
-------
∂28-Dec-83 1235 KJB@SRI-AI.ARPA Claire
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Dec 83 12:35:27 PST
Date: Wed 28 Dec 83 12:31:57-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Claire
To: csli-folks@SRI-AI.ARPA
Claire arrived at 8:40 this morning, weighing 8 lb 14 oz, and
standing 1' 8" tall. She and Mary Ellen are both well and
happy. Jon
-------
∂28-Dec-83 1237 BMOORE@SRI-AI.ARPA Jeremy William Moore
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Dec 83 12:37:08 PST
Date: Wed 28 Dec 83 11:23:13-PST
From: Bob Moore <BMOORE@SRI-AI.ARPA>
Subject: Jeremy William Moore
To: AIC-Staff: ;
cc: sidner@BBNC.ARPA, mitch@MIT-OZ.MIT-CHAOS, sussman@MIT-OZ.MIT-CHAOS
ReSent-date: Wed 28 Dec 83 12:34:10-PST
ReSent-from: Bob Moore <BMOORE@SRI-AI.ARPA>
ReSent-to: csli-folks@SRI-AI.ARPA
Born: December 24, 1983.
Weight: 8 lbs., 2 oz.
Mother and son both doing quite well.
--Bob
-------
∂29-Dec-83 1034 EMMA@SRI-AI.ARPA Directory
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Dec 83 10:34:01 PST
Date: Thu 29 Dec 83 10:31:43-PST
From: Emma Pease <EMMA@SRI-AI.ARPA>
Subject: Directory
To: pkanerva@SUMEX-AIM.ARPA, almog@SRI-AI.ARPA, bresnan@PARC-MAXC.ARPA,
chappell@SRI-AI.ARPA, Eric@SRI-AI.ARPA, Gardenfors@SRI-AI.ARPA,
Hobbs@SRI-AI.ARPA, Igoni@SRI-AI.ARPA, Jmc-lists@SU-AI.ARPA,
kells@SRI-AI.ARPA, Lauri@SRI-AI.ARPA
cc: emma@SRI-AI.ARPA
You have not yet filled out the following form. Please do so and
return it to me as soon as possible.
We realize that we already have some information but wish to ensure
the accuracy of the directory by double checking, so please complete
all the non-optional entries on the form.
We hope this directory will be useful to you in communicating with
other CSLI folks, including those not on the NET. We appreciate
your response so that our directory can be as complete as possible.
As soon as the input is finished, copies will be available in the
lobby at Ventura. For questions, contact (Emma@sri-ai) or Emma Pease
at (415) 497-0939. Thanks for your cooperation.
1) NAME: 2) NICKNAME(optional):
3) NET ADD: 4) ALT NET ADD:
5) TITLE: 6) ALT TITLE:
7) WORK ADD: 8) ALT WORK ADD:
9) WORK PH: 10) ALT WORK PH:
12) HOME ADD(optional):
13) HOME PH(optional):
-------
I do not fill out forms for organizations that already have the information
except under penalty of law.
∂30-Dec-83 0322 LAWS@SRI-AI.ARPA AIList Digest V1 #117
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Dec 83 03:22:32 PST
Date: Thu 29 Dec 1983 23:42-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V1 #117
To: AIList@SRI-AI
AIList Digest Friday, 30 Dec 1983 Volume 1 : Issue 117
Today's Topics:
Reply - Fiber Optic Camera,
Looping Problem - Loop Detection and Classical Psychology,
Logic Programming - Horn Clauses, Disjunction, and Negation,
Alert - Expert Systems & Molecular Design,
AI Funding - New Generation Discussion,
Science - Definition
----------------------------------------------------------------------
Date: 23 Dec 1983 11:59-EST
From: David.Anderson@CMU-CS-G.ARPA
Subject: fiber optic camera?
The University of Pittsburgh Observatory is experimenting with just
such an imaging system in one of their major projects, trying to
(indirectly) observe planetary systems around nearby stars. They claim
that the fiber optics provide so much more resolution than the
photography they used before that they may well succeed. Another major
advantage to them is that they have been able to automate the search;
no more days spent staring at photographs.
--david
------------------------------
Date: Fri 23 Dec 83 12:01:07-EST
From: Michael Rubin <RUBIN@COLUMBIA-20.ARPA>
Subject: Loop detection and classical psychology
I wonder if we've been incorrectly thinking of the brain's loop detection
mechanism as a sort of monitor process sitting above a train of thought,
and deciding when the latter is stuck in a loop and how to get out of it.
This approach leads to the problem of who monitors the monitor, ad
infinitum. Perhaps the brain detects loops in *hardware*, by classical
habituation. If each neuron is responsible for one production (more or
less), then a neuron involved in a loop will receive the same inputs so
often that it will get tired of seeing those inputs and fire less
frequently (return a lower certainty value), breaking the loop. The
detection of higher level loops such as "Why am I trying to get this PhD?"
implies that there is a hierarchy of little production systems (or
whatever), one for each chunk of knowledge. [Next question - how are
chunks formed? Maybe there's a low-level explanation for that too, having
to do with classical conditioning....]
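A toy sketch of that habituation idea (the class, decay factor, and threshold below are invented; this is not a model of real neurons): a production's certainty for a given input pattern decays each time the same pattern recurs, so a chain of firings that keeps presenting identical inputs eventually drops below threshold and the loop breaks.

    class HabituatingRule:
        """A production whose certainty for a given input pattern decays
        every time that pattern is seen again."""
        def __init__(self, decay=0.5, threshold=0.1):
            self.strength = {}                 # input pattern -> current certainty
            self.decay = decay
            self.threshold = threshold

        def fire(self, pattern):
            s = self.strength.get(pattern, 1.0)
            self.strength[pattern] = s * self.decay      # "tired of seeing those inputs"
            return s if s >= self.threshold else None    # None: refuses to fire, loop breaks

    rule = HabituatingRule()
    repetitions = 0
    while rule.fire("why am I trying to get this PhD?") is not None:
        repetitions += 1
    print("loop broken after", repetitions, "firings")   # -> 4 with these constants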
BTW, I thought of this when I read some word or other so often that it
started looking funny; that phenomenon has gotta be a misfeature of loop
detection. Some neuron in the dictionary decides it's been seeing that damn
word too often, so it makes its usual definition less certain; the parse
routine that called it gets an uncertain definition back and calls for
help.
--Mike Rubin <Rubin@Columbia-20>
------------------------------
Date: 27 Dec 1983 16:30:08-PST
From: marcel.uiuc@Rand-Relay
Subject: Re: a trivial reasoning problem?
This is an elaboration of why a problem I submitted to the AIList seems
to be unsolvable using regular Horn clause logic, as in Prolog. First I'll
present the problem (of my own devising), then my comments, for your critique.
Suppose you are shown two lamps, 'a' and 'b', and you
are told that, at any time,
1. at least one of 'a' or 'b' is on.
2. whenever 'a' is on, 'b' is off.
3. each lamp is either on or off.
WITHOUT using an exhaustive generate-and-test strategy,
enumerate the possible on-off configurations of the two
lamps.
If it were not for the exclusion of dumb-search-and-filter solutions, this
problem would be trivial. The exclusion has left me baffled, even though
the problem seems so logical. Check me on my thinking about why it's so
difficult.
1. The first constraint (one or both lamps on) is not regular Horn clause
logic. I would like to be able to state (as a fact) that
on(a) OR on(b)
but since regular Horn clauses are restricted to at most one positive
literal I have to recode this. I cannot assert two independent facts
'on(a)', 'on(b)' since this suggests that 'a' and 'b' are always both
on. I can however express it in regular Horn clause form:
not on(b) IMPLIES on(a)
not on(a) IMPLIES on(b)
As it happens, both of these are logically equivalent to the original
disjunction. So let's write them as Prolog:
on(a) :- not on(b).
on(b) :- not on(a).
First, this is not what the disjunction meant. These rules say that 'a'
is provably on only when 'b' is not provably on, and vice versa, when in
fact 'a' could be on no matter what 'b' is.
Second, a question ?- on(X). will result in an endless loop.
Third, 'a' is not known to be on except when 'b' is not known to be on
(which is not the same as when 'b' is known to be off). This sounds as
if the closed-world assumption might let us get away with not being able
to prove anything (if we can't prove something we can always assume its
negation). Not so. We do not know ANYTHING about whether 'a' or 'b' are
on OR off; we only know about constraints RELATING their states. Hence
we cannot even describe their possible states, since that would require
filling in (by speculative hypothesis) the states of the lamps.
What is wanted is a non-regular Horn clause, but some of the nice
properties of Logic Programming (e.g., completeness and consistency under the
closed-world assumption, i.e., a reasonable negation operator) do not apply
to non-regular Horn clauses.
2. The second constraint (whenever 'a' is on, 'b' is off) shares some of the
above problems, and a new one. We want to say
on(a) IMPLIES not on(b), or not on(b) :- on(a).
but this is not possible in Prolog; we have to say it in what I feel to
be a rather contrived manner, namely
on(b) :- on(a), !, fail.
Unfortunately this makes no sense at all to a theoretician. It is trying
to introduce negative information, but under the closed-world assumption,
saying that something is NOT true is just the same as not saying it at all,
so the clause is meaningless.
Alternative: define a new predicate off(X) which is complementary to on(X).
That is the conceptualization suggested by the third problem constraint.
3. off(X) :- not on(X).
on(X) :- not off(X).
This idea has all the problems of the first constraint, including the
creation of another endless loop.
It seems this problem is beyond the capabilities of present-day logic
programming. Please let me know if you can find a solution, or if you think
my analysis of the difficulties is inaccurate.
Marcel Schoppers
U of Illinois at Urbana-Champaign
{pur-ee|ihnp4}!uiucdcs!marcel
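As a sanity check on the analysis above, here is a minimal Python sketch (not
Prolog, and not a solution to the stated problem) of why the query ?- on(X)
loops under the naive encoding; the toy rule table and the depth guard, which
stands in for Prolog exhausting its stack, are invented for illustration:
# Toy knowledge base: each head is provable if all goals in its body succeed.
RULES = {
    "on(a)": ["not on(b)"],
    "on(b)": ["not on(a)"],
}

def prove(goal, depth=0):
    if depth > 20:                              # stand-in for running out of stack
        raise RecursionError("endless loop while proving " + goal)
    if goal.startswith("not "):
        return not prove(goal[4:], depth + 1)   # negation as failure
    body = RULES.get(goal)
    if body is None:
        return False                            # closed world: unprovable => false
    return all(prove(g, depth + 1) for g in body)

try:
    prove("on(a)")
except RecursionError as e:
    print(e)    # proving on(a) needs "not on(b)", which needs "not on(a)", ...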
------------------------------
Date: Mon 26 Dec 83 22:15:06-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: High Technology Articles
The January issue of High Technology has a fairly good introduction
to expert systems for commercial applications. As usual for this
magazine, there are corporate names and addresses and product
prices. The article mentions that there are probably fewer than
200 "knowledge engineers" in the country, most at universities
and think tanks; an AI postdoc willing to go into industry, but
with no industry experience, can command $70K.
The business outlook section is not the usual advice column
for investors, just a list of some well-known AI companies. The
article is also unusual in that it bases a few examples of knowledge
representation and inference on the fragment BIRD IS-A MAMMAL.
Another interesting article is "Designing Molecules by Computer".
Several approaches are given, but one seems particularly pertinent
to the recent AIList discussion of military AI funding. Du Pont
researchers are studying how a drug homes in on its receptor site.
They use an Army program that generates line-of-sight maps for
TV-controlled antitank missiles to "fly" a drug in and observe how its
ability to track its receptor site on the enzyme surface is influenced
by a variety of force fields and solvent interactions. A different
simulation with a similar purpose uses robotic software for assembling
irregular components to "pick up" the drug and "insert" it in the
enzyme.
-- Ken Laws
------------------------------
Date: 23 December 1983 21:41 est
From: Dehn at MIT-MULTICS (Joseph W. Dehn III)
Subject: "comparable" quotes
Person at University of Tokyo, editor of a scientific/engineering
journal, says computers will be used to solve human problems.
Person at DARPA says computers will be used to make better weapons
("ways of killing people").
Therefore, Japanese are humane, Americans are warmongers.
Huh?
What is somebody at DARPA supposed to say is the purpose of his R&D
program? As part of the Defense Department, that agency's goal SHOULD
be to improve the defense of the United States. If they are doing
something else, they are wasting the taxpayer's money. There are
undoubtedly other considerations involved in DARPA's activities,
bureaucratic, economic, scientific, etc., but, nobody should be
astonished when an official statement of purpose states the official
purpose!
Assuming the nation should be defended, and assuming that advanced
computing can contribute to defense, it makes sense for the national
government to take an interest in advanced computing for defense. Thus,
the question should not be, "why do Americans build computers to kill
people", but rather why don't they, like the Japanese, ALSO, and
independent of defense considerations (which are, as has been pointed
out, different in Japan), build computers "to produce profitable
industrial products"?
Of course, before we try to solve this puzzle, we should first decide
that there is something to be solved. Is somebody suggesting that
because there are no government or quasi-government statements of
purpose, Americans are not working on producing advanced and
profitable computer products? What ARE all those non-ARPA people doing
out there in netland, anyway? Where are IBM's profits coming from?
How can we meaningfully compare the "effort" being put into computer
research in Japan and the U.S.? Money? People? How about results?
Which country has produced more working AI systems (you pick the
definition of "working" and "AI")?
-jwd3
------------------------------
Date: 29 Dec 1983 09:11:34-PST
From: Mike Brzustowicz <mab@aids-unix>
Subject: Japan again.
Just one more note. Not only do we supply Japan's defense, but by treaty
they cannot supply their own (except for a very small national guard-type
force).
------------------------------
Date: 21 Dec 83 19:49:32-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!princeton!eosp1!robison @ Ucb-Vax
Subject: Re: Information sciences vs. physical sc - (nf)
Article-I.D.: eosp1.466
I disagree - astronomy IS an experimental science. Even before the
age of space rockets, some celebrated astronomical experiments had
been performed. In astronomy, as in all sciences, one observes,
makes hypotheses, and then tries to verify the hypotheses by
observation. In chemistry and physics, a lot of attention is paid
to setting up an experiment, as well as observing the experiment;
in astronomy (geology as well!), experiments consist mostly
of observation, since there is hardly anything that people are capable
of setting up. Here are some pertinent examples:
(1) An experiment to test a theory about the composition of the sun has
been going on for several years. It consists of an attempt to trap
neutrinos from the sun in a pool of chlorine underground. The number
of neutrinos detected has been about 1/4 of what was predicted, leading
to new suggestions about both the composition of the sun
and (in particle physics) the physical properties of neutrinos.
(2) An experiment to verify Einstein's theory of relativity,
particularly the hypothesis that the presence of large masses curves
space (general relativity) -- measurements of the apparent positions of
stars near the sun, made during a total eclipse, were shifted by an amount
consistent with Einstein's theory.
Obviously, astronomical experiments will seem to lie half in the realm
of physics, since the theories of physics are the tools with which we
try to understand the skies.
Astronomers and physicists, please help me out here; I'm neither.
In fact, I don't even believe in neutrinos.
- Keremath, care of:
Robison
decvax!ittvax!eosp1
or: allegra!eosp1
------------------------------
Date: Thu, 29 Dec 83 15:44 EST
From: Hengst.WBST@PARC-MAXC.ARPA
Subject: Re: AIList Digest V1 #116
The flaming on the science component of computer science intrigues me
because it parallels some of the 1960's and 1970's discussion about the
science component of social science. That particular discussion, to
which Thomas Kuhn also contributed, also has not yet reached closure
which leaves me with the feeling that science might best be described as
a particular form of behavior by practitioners who possess certain
qualifications and engage in certain rituals approved by members of the
scientific tribe.
Thus, one definition of science is that it is whatever it is that
scientists do in the name of science (a contextual and social
definition). Making coffee would not be scientific activity but reading
a professional book or entertaining colleagues with stimulating thoughts
and writings would be. From this perspective, employing the scientific
method is merely a particular form of engaging in scientific practice
without judging the outcome of that scientific practice. Relying upon
the scientific method by unlicensed practitioners would not result in
science but in lay knowledge. This means that authoritative statements
by members of scientific community are automatically given a certain
truth value. "Professor X says this", "scientific study Y demonstrates
that . . ." should all be considered as scientific statements because
they are issued as authoritative statements in the name of science. This
interpretation of science discounts the role of Edward Teller as a
credible spokesman in the area of nuclear weapons policy in foreign
affairs.
The "licensing" of the practitioners derives from the formalization of
the training and education in the particular body of knowledge: e.g., a
university degree is a form of license. Scientific knowledge can
differentiate itself from other forms of knowledge on the basis of
attempts (but not necessarily success) at formalization. Physical
sciences study phenomena which lend themselves to better quantification
(they do have better metrics!) and higher levels of formalization. The
deterministic bodies of knowledge of the physical sciences allow for
better prediction than the heavily probabilistic bodies of knowledge of
the social sciences, which facilitate explanation more than prediction.
I am not sure if a lack of predictive power or lack of availability of
the scientific method (experimental design in its many flavors) makes
anyone less a scientist. The social sciences are rich in description and
insight which in my judgment compensates for a lack of hierarchical,
deductive formal knowledge.
From this point of view computer science is science if it involves
building a body of knowledge with attempts at formulating rules in some
consistent and verifiable manner by a body of trained practitioners.
Medieval alchemy also qualifies due to its apprenticeship program (rules
for admitting members) and its rules for building knowledge.
Fortunately, we have better rules now.
Acco
------------------------------
Date: Thu 29 Dec 83 23:38:18-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Philosophy of Science Discussion
I hate to put a damper on the discussion of Scientific Method,
but feel it is my duty as moderator. The discussion has been
intelligent and entertaining, but has strayed from the central
theme of this list. I welcome discussion of appropriate research
techniques for AI, but discussion of the definition and philosophy
of science should be directed to Phil-Sci@MIT-OZ. (Net.ai members
are free to discuss whatever they wish, of course, but I will
not pass further messages on this topic to the ARPANET readership.)
-- Ken Laws
------------------------------
End of AIList Digest
********************
∂01-Jan-84 1731 @MIT-MC:crummer@AEROSPACE Autopoietic Systems
Received: from MIT-MC by SU-AI with TCP/SMTP; 1 Jan 84 17:31:03 PST
Received: from MIT-MC by MIT-OZ via Chaosnet; 1 Jan 84 20:29-EST
Date: Sun, 1 Jan 84 17:26:26 PST
From: Charlie Crummer <crummer@AEROSPACE>
To: PHIL-SCI@MIT-MC
CC: GAVAN%MIT-OZ@MIT-MC,Tong.PA@PARC-MAXC
Subject: Autopoietic Systems
Charlie,
What are you after? I assume you've tossed out the concept of
autopoiesis to solicit reactions, but of what nature? Any or all of the
following might be what you want:
-------------------------------
What is an autopoietic system?
I would like to see what deductions and inductions we can come up with if we
define an autopoietic system simply to be any system that produces itself. I
do not mean "produces itself out of nothing".
Surely "self-producing system" is inadequate, if only because that is
just as unclear as "autopoietic system". "[A cell] establishes and
I think you confuse narrowness of definition with clarity. This definition of
autopoietic seems to me to be an acceptable definition of what is probably a
broad category of systems.
maintains its own integrity from within." What does "integrity" mean?
The cell doesn't physically collapse? You surely don't mean the cell
does not exist in or depend upon an environment. I understand an
autopoietic system to be one that is *structure-coupled* to its
environment. You perhaps want a discussion of this term.
Integrity means identity in time and space. An amoeba is an autopoietic system
because it maintains itself in such a way that it can be distinguished from
its surroundings for as long as it is "alive," i.e., producing itself. The
"production" is a process consisting of changes in the system from moment to
moment; some sort of metabolism. (A rock is not autopoietic because there is no
production process at work that maintains its identity.)
Are autopoietic systems self-referential systems?
This is an interesting question. (I at least think so.) For a living organism
to be able to repair itself, for example, it has to somehow "take stock" of
itself in order to effect the repairs. The repair usually requires the import
of material and energy from the outside and results in a local decrease of
entropy.
You mention self-reference in your msg header, but make no further
reference to it.
Sorry, you are right. I notice that I have identified self-referential and
self-interactive in my mind. I think the concepts are closely related. In
order for a biological system to maintain itself it has to be able to refer to
itself (take stock), and effect the repairs (interact with itself). Quantum
gauge fields actually generates themselves; they are their own sources. They
do this by interacting with themselves. The gauge field is actually a hybrid
curvature tensor that, due to the non-commutativity of the group of gauge
potentials, has a term that contains the gauge potentials themselves. This
term appears in addition to the familiar curl of the four-vector gauge
potential.
Are human beings examples of autopoietic systems?
Yes, as biological systems anyway. I don't know whether it makes sense to say
that the mind generates itself or not. Maybe the mind is a process performed
by the biological machine. Can a process produce itself using the biological
system as a resource? I don't know what to think about that.
You mention cells, and speculate on organizations, but you left out an
extremely important intermediary example.
Consider as an intermediary example any living thing.
What can we gain by using the notion of autopoiesis?
I don't know. For me the notion gives rise to a lot of interesting questions
and a new way of looking at systems. Maybe from this vantage point we can see
solutions to some problems.
There would be no point in pursuing a discussion on autopoiesis if the
result would be like trying to define "intelligence" or "life".
I heartily agree. I (Maturana, Varela, Goguen, etc.) propose that we accept
the definition as I stated it and just see if that definition turns out to be
useful.
-------------------------------
Why don't you give preliminary answers to these questions, so we can
understand what manner of beast it is you wish us to study.
Chris
This is in reply to GAVAN's rejoinder of 28 Nov 1983
From: crummer at AEROSPACE (Charlie Crummer)
I have been reading lately about so-called "autopoietic" systems, i.e.
systems which produce themselves (they may also reproduce themselves but
that is something else). The concept comes from the biologists Humberto
Maturana, Francisco Varela, and others. An example of an autopoietic system
is a living cell. It establishes and maintains its own integrity
from within. This is an interesting concept and may have use in
describing political and other organizational systems.
Maturana used to claim that autopoietic systems are "closed", that is,
(according to standard biological usage promulgated by von
Bertalanffy) they do not exchange matter and energy with their
environments. After hearing numerous disputes on this question at
conferences (my sources tell me), Maturana backed down. Autopoietic
systems are relatively closed, but certainly not completely. As
biological, living systems they are open. They exchange matter and
energy with their environments. An autopoietic system is certainly
a system that reproduces itself, but I doubt that it PRODUCES itself.
Do Maturana or Varela claim this? I've never read any such claim.
I agree with Maturana's backed-down position; that is, I think that it is not
interesting to define autopoietic systems as closed; bounded, yes. Maturana
DEFINES autopoietic as self-producing. I apologize for not having the
reference in front of me. I had to return the book to the library and I can't
remember the exact title. It is a collection of papers on the subject. I
don't regard Maturana et al. as gods, so I think that our definition (we of
PHIL-SCI), just like theirs, is good if it is useful, that's all.
As for the utility of self-reproduction in describing political and
other organizational systems, yes, there is interest in the concept
among some social scientists. Few, if any, of them would maintain
that any organization or state is a closed system, however. They
speak instead of the RELATIVE autonomy of the state, not complete
autonomy. In other words, there is certainly some amount of system
maintenance from within, but organizations are also susceptible to
(and responsive to) environmental pressures.
I can't think of an example of a political system reproducing itself. I can't
even come up with a meaning for reproduction for political systems. Political
systems, like living systems, are susceptible to their environment. I propose
that the issue of autonomy is not really relevant to real systems. No real
systems are truly autonomous.
The desire to show that a system (ANY system) is completely autonomous
is, in my view, just another attempt to revive the rationalist dogma
of the middle ages. Undoubtedly the best attempt was made by Kant
in *The Critique of Pure Reason*, but in order to do so he was
forced to posit a dualism (noumena vs. phenomena) that he already
knew (from his studies of Leibniz) was untenable. According to
Weldon's critique of The Critique (Oxford University Press, in the
1950s or 60s), Kant had been influenced by Locke's student Tetens.
See also P. F. Strawson's critique of Kant, *The Bounds of Sense*.
I agree. Let's be done with the discussion of autonomy.
I apologize for the delay in my reply.
--Charlie
∂03-Jan-84 1129 SCHMIDT@SUMEX-AIM.ARPA HPP dolphins & 3600's unavailable Jan 4 (tomorrow)
Received: from SUMEX-AIM by SU-AI with TCP/SMTP; 3 Jan 84 11:29:24 PST
Date: Tue 3 Jan 84 11:31:54-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: HPP dolphins & 3600's unavailable Jan 4 (tomorrow)
To: HPP-Dolphins@SUMEX-AIM.ARPA, HPP-Lisp-Machines@SUMEX-AIM.ARPA
Nick tells me that tomorrow, Jan. 4, all systems in the
machine room at WR will be down, with the exception of the tips, for
electrical power rewiring. I hope this will not cause any major
inconveniences. The electrical work will be finished, and the systems will
hopefully be back up the next day.
--Christopher
-------
∂03-Jan-84 1823 LAWS@SRI-AI.ARPA AIList Digest V2 #1
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Jan 84 18:23:22 PST
Date: Tue 3 Jan 1984 15:33-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #1
To: AIList@SRI-AI
AIList Digest Wednesday, 4 Jan 1984 Volume 2 : Issue 1
Today's Topics:
Administrivia - Host List & VISION-LIST,
Cognitive Psychology - Looping Problem,
Programming Languages - Questions,
Logic Programming - Disjunctions,
Vision - Fiber Optic Camera
----------------------------------------------------------------------
Date: Tue 3 Jan 84 15:07:27-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Host List
The AIList readership has continued to grow throughout the year, and only
a few individuals have asked to be dropped from the distribution network.
I cannot estimate the number of readers receiving AIList through bboards
and remailing nodes, but the existence of such services has obviously
reduced the outgoing net traffic. For those interested in such things,
I present the following approximate list of host machines on my direct
distribution list. Numbers in parentheses indicate individual subscribers;
all other hosts (and those marked with "bb") have redistribution systems.
A few of the individual subscribers are undoubtedly redistributing
AIList to their sites, and a few redistribution nodes receive the list
from other such nodes (e.g., PARC-MAXC from RAND-UNIX). AIList is
also available to USENET through the net.ai distribution system.
AEROSPACE(8), AIDS-UNIX, BBNA(2), BBNG(1), BBN-UNIX(8), BBN-VAX(3),
BERKELEY(3), BITNET@BERKELEY(2), ONYX@BERKELEY(1), UCBCAD@BERKELEY(2),
BRANDEIS(1), BRL(bb+1), BRL-VOC(1), BROWN(1), BUFFALO-CS(1),
cal-unix@SEISMO(1), CIT-20, CMU-CS-A(bb+11), CMU-CS-G(3),
CMU-CS-SPICE(1), CMU-RI-ISL1(1), COLUMBIA-20, CORNELL,
DEC-MARLBORO(7), EDXA@UCL-CS(1), GATECH, HI-MULTICS(bb+1),
CSCKNP@HI-MULTICS(2), SRC@HI-MULTICS(1), houxa@UCLA-LOCUS(1),
HP-HULK(1), IBM-SJ(1), JPL-VAX(1), KESTREL(1), LANL, LLL-MFE(2),
MIT-MC, NADC(2), NOSC(4), NOSC-CC(1), CCVAX@NOSC(3), NPRDC(2),
NRL-AIC, NRL-CSS, NSF-CS, NSWC-WO(2), NYU, TYM@OFFICE(bb+2),
RADC-Multics(1), RADC-TOPS20, RAND-UNIX, RICE, ROCHESTER(2),
RUTGERS(bb+2), S1-C(1), SAIL, SANDIA(bb+1), SCAROLINA(1),
sdcrdcf@UCBVAX(1), SRI-AI(bb+6), SRI-CSL(1), SRI-KL(12), SRI-TSC(3),
SRI-UNIX, SU-AI(2), SUMEX, SUMEX-AIM(2), SU-DSN, SU-SIERRA@SU-DSN(1),
SUNY-SBCS(1), SU-SCORE(11), SU-PSYCH@SU-SCORE(1), TEKTRONIX(1), UBC,
UCBKIM, UCF-CS, UCI, UCL-CS, UCLA-ATS(1), UCLA-LOCUS(bb+1),
UDel-Relay(1), UIUC, UMASS-CS, UMASS-ECE(1), UMCP-CS, UMN-CS(bb+1),
UNC, UPENN, USC-ECL(7), USC-CSE@USC-ECL(2), USC-ECLD@USC-ECL(1),
SU-AI@USC-ECL(4), USC-ECLA(1), USC-ECLB(2), USC-ECLC(2), USC-ISI(5),
USC-ISIB(bb+6), USC-ISID(1), USC-ISIE(2), USC-ISIF(10), UTAH-20(bb+2),
utcsrgv@CCA-UNIX(1), UTEXAS-20, TI@UTEXAS-20(1), WISC-CRYS(3),
WASHINGTON(4), YALE
-- Ken Laws
------------------------------
Date: Fri, 30 Dec 83 15:20:41 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: Are you interested in a more specialized "VISION-LIST"?
I've been feeling frustrated (again). I really like AIList,
since it provides a nice forum for general AI topics. Yet, like
many of you out there, I am primarily a vision researcher looking into
ways to facilitate machine vision and trying to decipher the strange,
all-too-often unknown mechanisms of sight. What we need is a
specialized VISION-LIST to provide a more specific forum that will
foster a greater exchange of ideas among researchers in our field.
So...one question and one request: 1) is there such a list in the
works?, and 2) if you are interested in such a list PLEASE SPEAK UP!!
Thanks!
Philip Kahn
UCLA
------------------------------
Date: Fri 30 Dec 83 11:04:17-PST
From: Rene Bach <BACH@SUMEX-AIM.ARPA>
Subject: Loop detection
Mike,
It seems to me that we have an inbuilt mechanism which remembers
what is done (thought) at all times, i.e., we know and remember (more or
less) our train of thought. When we get into a loop, the mind is
triggered immediately: at the first element we think it could be a
coincidence, but as more elements are found matching the loop, the more
convinced we get that there is a repeat. The reading example is quite
good: even when just one word appears in the same sentence context
(meaning rather than syntactical context), my mind is triggered and I go
back and check whether there is actually a loop or not. Thus, to implement this
property in a computer we would need a mechanism able to remember the
path and check, at each step, whether it has been followed already (and how
far). Detection of repeats of logical rather than word-for-word
sentences (or sets of ideas) is still left open.
I think that the loop detection mechanism is part of the
memorization process, which is an integral part of the reasoning engine
and it is not sitting "on top" and monitoring the reasoning process from
above.
Rene
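A minimal Python sketch of the mechanism Rene describes, i.e., remembering the
path and checking at each step whether the current state has occurred before;
the step function and the numeric states are hypothetical placeholders:
def run_with_loop_check(start, step, limit=100):
    path, seen = [], set()
    state = start
    for _ in range(limit):
        if state in seen:
            return path, state          # loop detected: state already on the path
        seen.add(state)
        path.append(state)
        state = step(state)
    return path, None                   # no repeat within the step limit

# Example: a "train of thought" that cycles with period four.
path, repeat = run_with_loop_check(0, lambda s: (s + 1) % 4)
print(path, "-> repeats at", repeat)    # [0, 1, 2, 3] -> repeats at 0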
------------------------------
Date: 2 January 1984 14:40 EST
From: Herb Lin <LIN @ MIT-ML>
Subject: stupid questions....
Speaking as an interested outsider to AI, I have a few questions that
I hope someone can answer in non-jargon. Any help is greatly appreciated:
1. Just why is a language like LISP better for doing AI stuff than a
language like PASCAL or ADA? In what sense is LISP "more natural" for
simulating cognitive processes? Why can't you do this in more tightly
structured languages like PASCAL?
2. What is the significance of not distinguishing between data and
program in LISP? How does this help?
3. What is the difference between decisions made in a production
system (as I understand it, a production is a construct of the form IF
X is true, then do Y, where X is a condition and Y is a procedure),
and decisions made in a PASCAL program (in which IF statements also
have the same (superficial) form).
many thanks.
------------------------------
Date: 1 Jan 84 1:01:50-PST (Sun)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: Re: a trivial reasoning problem? - (nf)
Article-I.D.: fortune.2135
Gee, and to a non-Prolog person (me) your problem seemed so simple
(even given the no-exhaustive-search rule). Let's see,
1. At least one of A or B is on = (A v B)
2. If A is on, B is not = (A -> ~B) = (~A v ~B) [def'n of ->]
3. A and B are binary conditions.
From #3, we are allowed to use first-order Boolean algebra (WFF'n'PROOF game).
(That is, #3 is a meta-condition.)
So, #1 and #2 together is just (#1) ↑ (#2) [using caret ↑ for conjunction]
or, #1 ↑ #2 = (A v B) ↑ (~A v ~B)
(distributivity) = (A ↑ ~A) v (A ↑ ~B) v (B ↑ ~A) v (B ↑ ~B)
(from #3 and ↑-axiom) = (A ↑ ~B) v (B ↑ ~A)
(def'n of xor) = A xor B
Hmmm... Maybe I am missing your original question altogether. Is your real
question "How does one enumerate the elements of a state-space (powerset)
for which a certain logical proposition is true without enumerating (examining)
elements of the state-space for which the proposition is false?"?
To me (an ignorant "non-ai" person), this seems excluded by a version of the
First Law of Thermodynamics, namely, the Law of the Excluded Miraculous Sort
(i.e. to tell which of two elements is bigger, you have to look at both).
It seems to me that you must at least look at SOME of the states for which the
proposition is false, or equivalently, you must use the structure of the
formula itself to do the selection (say, while doing a tree-walk). The problem
of the former approach is that the number of "bad" states should be kept
small (for efficiency), leading to all kinds of pruning heuristics; while
for the latter method the problem of elimination of duplicates (assuming
parallel processing) leads to the former method!
In either case, however, reasoning about the variables does not seem to
solve the problem; one must reason about the formulae. If Prolog admits
of constructing such meta-rules, you may have a chance. (I.e., "For all
true formulae 'X xor Y', only X need be considered when ~Y, and vice versa.")
In any event, I think your problem can be simplified to:
1'. A xor B
2'. A, B are binary variables.
Rob Warnock
UUCP: {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD: (415)595-8444
USPS: Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065
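The simplification above can be checked mechanically; the following short
Python sketch (illustrative only, not part of the original derivation) confirms
the equivalence over all four binary states:
from itertools import product

# (A v B) ^ (~A v ~B) should agree with A xor B for every assignment.
for a, b in product([False, True], repeat=2):
    lhs = (a or b) and ((not a) or (not b))
    assert lhs == (a != b)
print("(A v B) ^ (~A v ~B) is equivalent to A xor B")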
------------------------------
Date: 28 Dec 83 4:01:48-PST (Wed)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: REFERENCES FOR SPECIALIZED CAMERA DE - (nf)
Article-I.D.: fortune.2114
Please clarify what you mean by "get close to the focal point of the
optical system". For any lens system I've used (both cameras and TVs),
the imaging surface (the film or the sensor) already IS at the focal point.
As I recall, the formula (for convex lenses) is:
    1/f = 1/obj + 1/img
where "f" is the focal length of the lens, "obj" the distance to the "object",
and "img" the distance to the (real) image. Solving for minimum "obj + img",
the closest you can get a focused image to the object (using a lens) is 4*f,
with the lens midway between the object and the image (1/f = 1/2f + 1/2f).
Not sure what a bundle of fibers would do for you, since without a lens each
fiber picks up all the light around it within a cone of its numerical
aperture (NA). Some imaging systems DO use fiber bundles directly in contact
with film, but that's generally going the other way (from a CRT to film).
I think Tektronix has a graphics output device like that. I suppose you
could use it if the object were self-luminous...
Rob Warnock
UUCP: {sri-unix,amd70,hpda,harpo,ihnp4,allegra}!fortune!rpw3
DDD: (415)595-8444
USPS: Fortune Systems Corp, 101 Twin Dolphins Drive, Redwood City, CA 94065
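A quick numerical check of the thin-lens relation quoted above; the focal
length and the scanned object distances below are arbitrary illustrative
values, not figures from the message:
def image_distance(f, obj):
    return 1.0 / (1.0 / f - 1.0 / obj)        # solve 1/f = 1/obj + 1/img for img

f = 50.0                                      # focal length, arbitrary units
candidates = [1.1 * f + 0.1 * k for k in range(2000)]   # object distances > f
total, obj = min((o + image_distance(f, o), o) for o in candidates)
print(round(total, 1), round(obj, 1))         # about 200.0 (= 4*f) at obj = 100.0 (= 2*f)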
------------------------------
End of AIList Digest
********************
∂04-Jan-84 1020 DKANERVA@SRI-AI.ARPA Newsletter will resume January 12, 1984
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Jan 84 10:20:25 PST
Date: Wed 4 Jan 84 10:20:39-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Newsletter will resume January 12, 1984
To: csli-friends@SRI-AI.ARPA
There are no official activities at CSLI this Thursday,
January 5. The next CSLI Newsletter will appear next Thursday,
January 12, when the regular seminars, colloquia, and TINLunch
activities resume. -- Dianne Kanerva
-------
∂04-Jan-84 1139 STAN@SRI-AI.ARPA Foundations Seminar
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Jan 84 11:38:46 PST
Date: 4 Jan 1984 1133-PST
From: Stan at SRI-AI
Subject: Foundations Seminar
To: CSLI-folks:
The Foundations of Situated Language seminar for the winter quarter
will deal with practical reasoning as studied in AI and philosophy.
The goal of the seminar is to develop an understanding of the relation
between traditional issues and problems in philosophy that go by the
name of "practical reasoning" and computational approaches studied in
AI. To reach this goal we will read and closely analyze a small
number of classic papers on the subject.
The seminar will not be a colloquium series, but a working seminar in
which papers are distributed and read in advance. The first meeting
will be held on Jan. 12 in the Ventura Hall seminar room.
Tentative schedule:
Thurs. Jan. 12 Michael Bratman
"A partial overview of some philosophical work
on practical reasoning"
Thurs. Jan. 19 Kurt Konolige
Presentation of "Application of Theorem Proving to
Problem Solving," (C. Green), sections 1-5
Thurs. Jan. 26 John Perry
A philosopher grapples with the above
Later in the seminar we will discuss:
"STRIPS: A New Approach to the Application of Theorem Proving to
Problem Solving," (R. Fikes and N. Nilsson)
"The Frame Problem and Related Problems in Artificial Intelligence,"
(P. Hayes)
A philosophical paper on practical reasoning, to be selected.
-------
∂04-Jan-84 1157 @SU-SCORE.ARPA:TW@SU-AI Santa Cruz
Received: from SU-SCORE by SU-AI with TCP/SMTP; 4 Jan 84 11:57:47 PST
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 4 Jan 84 11:57:41-PST
Date: 04 Jan 84 1156 PST
From: Terry Winograd <TW@SU-AI>
Subject: Santa Cruz
To: faculty@SU-SCORE
An ex-student of mine said that she had heard something about a new
department or expansion of a CS department at Santa Cruz. Does
anyone know about it, or who would be a good person at SC to contact
to find out? Thanks --t
∂04-Jan-84 1817 GOLUB@SU-SCORE.ARPA Yet more on charging for the Dover
Received: from SU-SCORE by SU-AI with TCP/SMTP; 4 Jan 84 18:17:14 PST
Mail-From: BOSACK created at 4-Jan-84 15:27:07
Date: Wed 4 Jan 84 15:27:06-PST
From: Len Bosack <BOSACK@SU-SCORE.ARPA>
Subject: Yet more on charging for the Dover
To: SU-BBoards@SU-SCORE.ARPA
ReSent-date: Wed 4 Jan 84 18:17:21-PST
ReSent-from: Gene Golub <GOLUB@SU-SCORE.ARPA>
ReSent-to: faculty@SU-SCORE.ARPA
For well over a year, we have been telling people that we had to start
charging for the Dover; at least since the Fall, 1982 Town Meeting. At
the Fall, 1983 Town Meeting I thought I announced the start of charging
as soon as technically feasible.
We are charging our best estimate of the average cost per page printed
on the Dover. This is different from common business theory, which would
have us charge the marginal cost and expand facilities to meet demand.
We have no simple way to expand or replace this particular facility.
In the past, CF has not recovered its expenses. As these expenses must
be paid, the department has made good our losses by using unrestricted
funds. This year, department finances will not allow for any losses. The
rates for Sail and Score are set to recover the costs of those systems.
To bring our operations into overall balance, we must charge for the
Dover and other network services.
Len Bosack
-------
∂04-Jan-84 2049 LAWS@SRI-AI.ARPA AIList Digest V2 #2
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Jan 84 20:47:43 PST
Date: Wed 4 Jan 1984 16:31-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #2
To: AIList@SRI-AI
AIList Digest Thursday, 5 Jan 1984 Volume 2 : Issue 2
Today's Topics:
Hardware - High Resolution Video Projection,
Programming Languages - LISP vs. Pascal,
Net Course - AI and Mysticism
----------------------------------------------------------------------
Date: 04 Jan 84 1553 PST
From: Fred Lakin <FRD@SU-AI>
Subject: High resolution video projection
I want to buy a high-resolution monochrome video projector suitable for use with
generic LISP machine or Star-type terminals (i.e., approx. 1000 x 1000 pixels).
It would be nice if it cost less than $15K and didn't require expensive
replacement parts (like light valves).
Does anybody know of such currently on the market?
I know, chances seem dim, so on to my second point: I have heard it would be
possible to make a portable video projector that would cost $5K, weigh 25lb,
and project using monochrome green phosphor. The problem is that industry
does not feel the market demand would justify production at such a price ...
Any ideas on how to find out the demand for such an item? Of course if
all of you who might be interested in this kind of projector let me know
your suggestions, that would be a good start.
Thanks in advance for replies and/or notions,
Fred Lakin
------------------------------
Date: Wed 4 Jan 84 10:25:56-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: Re: stupid questions (i.e. Why Lisp?)
You might want to read an article by Beau Sheil (Xerox PARC)
in the February '83 issue of Datamation called "Power tools for
programmers." It is mostly about the Interlisp-D programming
environment, but might give you some insights about LISP in general.
I'll offer three other reasons, though.
Algol family languages lack the datatypes to conveniently
implement a large number of knowledge representation schemes. The same
goes for rules. Try to imagine setting up a pascal record structure to
embody the rules "If I have less than half of a tank of gas then I
have as a goal stopping at a gas station" & "If I am carrying valuable
goods, then I should avoid highway bandits." You could write pascal
CODE that sort of implemented the above, but DATA would be extremely
difficult. You would almost have to write a lisp interpreter in
pascal to deal with it. And then, when you've done that, try writing
a compiler that will take your pascal data structures and generate
native code for the machine in question! Now, do it on the fly, as a
knowledge engineer is augmenting the knowledge base!
Algol languages have a tedious development cycle because they
typically do not let a user load/link the same module many times as he
debugs it. He typically has to relink the entire system after every
edit. This prevents much in the way of incremental compilation, and
makes such languages tedious to debug in. This is an argument against
the languages in general, and doesn't apply to AI explicitly. The AI
community feels this as a pressure more, though, perhaps because it
tends to build such large systems.
Furthermore, consider that most bugs in non-AI systems show up
at compile time. If a flaw is in the KNOWLEDGE itself in an AI
system, however, the flaws will only show up in the form of incorrect
(unintelligent?) behavior. Typically only lisp-like languages provide
the run-time tools to diagnose such problems. In Pascal, etc, the
programmer would have to go back and explicitly put all sorts of
debugging hooks into the system, which is both time consuming, and is
not very clean. --Christopher
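To make the rules-as-data point concrete, here is a minimal sketch (in Python
rather than Lisp or Pascal, purely for illustration; the state fields and rule
texts are adapted from the examples above): the rules are held as plain data
and interpreted at run time, so adding a rule requires no recompilation.
# Each rule is data: a condition on the current state plus a suggested action.
RULES = [
    {"if": lambda s: s["fuel"] < 0.5,         "then": "add goal: stop at a gas station"},
    {"if": lambda s: s["carrying_valuables"], "then": "avoid highway bandits"},
]

def fire_rules(state):
    return [rule["then"] for rule in RULES if rule["if"](state)]

print(fire_rules({"fuel": 0.3, "carrying_valuables": True}))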
------------------------------
Date: 4 Jan 84 13:59:07 EST
From: STEINBERG@RUTGERS.ARPA
Subject: Re: Herb Lin's questions on LISP etc.
Herb:
Those are hardly stupid questions. Let me try to answer:
1. Just why is a language like LISP better for doing AI stuff than a
language like PASCAL or ADA?
There are two kinds of reasons. You could argue that LISP is more
oriented towards "symbolic" processing than PASCAL. However, probably
more important is the fact that LISP provides a truly outstanding
environment for exploratory programming, that is, programming where
you do not completely understand the problem or its solutions before
you start programming. This is normally the case in AI programming -
even if you think you understand things you normally find out there
was at least something you were wrong about or had forgotten. That's
one major reason for actually writing the programs.
Note that I refer to the LISP environment, not just the language. The
existence of good editors, debuggers, cross reference aids, etc. is at
least as important as the language itself. A number of features of LISP
make a good environment easy to provide for LISP. These include the
compatible interpreter/compiler, the centrality of function calls, and the
simplicity and accessibility of the internal representation of programs.
For a very good introduction to the flavor of programming in LISP
environments, see "Programming in an Interactive Environment, the LISP
Experience", by Erik Sandewall, Computing Surveys, V. 10 #1, March 1978.
2. What is the significance of not distinguishing between data
and program in LISP? How does this help?
Actually, in ANY language, the program is also data for the interpreter
or compiler. What is important about LISP is that the internal form used
by the interpreter is simple and accessible. It is simple in that the
the internal form is a structure of nested lists that captures most of
both the syntactic and the semantic structure of the code. It is accessible
in that this structure of nested lists is in fact a basic built in data
structure supported by all the facilities of the system, and in that a
program can access or set the definition of a function.
Together these make it easy to write programs which operate on other programs.
E.g. to add a trace feature to PASCAL you have to modify the compiler or
interpreter. To add a trace feature to LISP you need not modify the
interpreter at all.
Furthermore, it turns out to be easy to use LISP to write interpreters
for other languages, as long as the other languages use a similar
internal form and have a similarly simple relation between form and
semantics. Thus, a common way to solve a problem in LISP is to
implement a language in which it is easy to express solutions to
problems in a general class, and then use this language to solve your
particular problem. See the Sandewall article mentioned above.
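To illustrate the simple, accessible internal form being described, here is a
minimal sketch using Python lists in place of LISP's, purely to mirror the
idea; the two-operator mini-evaluator is invented for illustration:
import math

# The "program" is an ordinary nested list, so other code can read or rewrite it.
OPS = {"+": lambda *xs: sum(xs), "*": lambda *xs: math.prod(xs)}

def evaluate(expr):
    if not isinstance(expr, list):
        return expr                                   # numbers evaluate to themselves
    op, *args = expr
    return OPS[op](*[evaluate(a) for a in args])      # evaluate arguments, apply operator

program = ["+", 1, ["*", 2, 3]]
print(evaluate(program))                              # 7
print(program[2][0])                                  # another program can inspect it: *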
3. What is the difference between decisions made in a production
system and decisions made in a PASCAL program (in which IF statements
also have the same (superficial) form).
Production Systems gain some advantages by restricting the languages
for the IF and THEN parts. Also, in many production systems, all
the IF parts are evaluated first, to see which are true, before any
THEN part is done. If more than one IF part is true, some other
mechanism decides which THEN part (or parts) to do. Finally, some
production systems such as EMYCIN do "backward chaining", that is, one
starts with a goal and asks which THEN parts, if they were done, would
be useful in achieving the goal. One then looks to see if their
corresponding IF parts are true, or can be made true by treating them
as sub-goals and doing the same kind of reasoning on them.
A very good introduction to production systems is "An Overview of Production
Systems" by Randy Davis and Jonathan King, October 1975, Stanford AI Lab
Memo AIM-271 and Stanford CS Dept. Report STAN-CS-75-524. It's probably
available from the National Technical Information Service.
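A compact sketch of the backward chaining just described, EMYCIN-like only in
spirit: to establish a goal, find rules whose THEN part yields it and treat
their IF parts as sub-goals. The rules and facts below are invented examples.
# Each rule: (list of IF conditions, THEN conclusion).
RULES = [
    (["engine_cranks", "no_spark"], "ignition_fault"),
    (["battery_ok", "starter_ok"], "engine_cranks"),
]
FACTS = {"battery_ok", "starter_ok", "no_spark"}

def prove(goal):
    if goal in FACTS:
        return True
    return any(all(prove(g) for g in ifs)
               for ifs, then in RULES if then == goal)

print(prove("ignition_fault"))    # True: chains back through engine_cranks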
------------------------------
Date: 1 Jan 84 8:42:34-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Netwide Course -- AI and Mysticism!!
Article-I.D.: psuvax.395
*************************************************************************
* *
* An Experiment in Teaching, an Experiment in AI *
* Spring Term Artificial Intelligence Seminar Announcement *
* *
*************************************************************************
This Spring term Penn State inaugurates a new experimental course:
"THE HUMAN CONDITION: PROBLEMS AND CREATIVE SOLUTIONS".
This course explores all that makes the human condition so joyous and
delightful: learning, creative expression, art, music, inspiration,
consciousness, awareness, insight, sensation, planning, action, community.
Where others study these DESCRIPTIVELY, we will do so CONSTRUCTIVELY. We
will gain familiarity by direct human experience and by building artificial
entities which manifest these wonders!!
We will formulate and study models of the human condition -- an organism of
bounded rationality confronting a bewilderingly complex environment. The
human organism must fend for survival, but it is aided by some marvelous
mechanisms: perception (vision, hearing), cognition (understanding, learning,
language), and expression (motor skill, music, art). We can view these
respectively as the input, processing, and output of symbolic information.
These mechanisms somehow encode all that is uniquely human in our experience
-- or do they?? Are these mechanisms universal among ALL sentient beings, be
they built from doped silicon or neural jelly? Are these mechanisms really
NECESSARY and SUFFICIENT for sentience?
Not content with armchair philosophizing, we will push these models toward
the concreteness needed for physical implementation. We will build the tools
that will help us to understand and use the necessary representations and
processes, and we will use these tools to explore the space of possible
realizations of "artificial sentience".
This will be no ordinary course. For one thing, it has no teacher. The
course will consist of a group of highly energetic individuals engaged in
seeking the secrets of life, motivated solely by the joy of the search
itself. I will function as a "resource person" to the extent my background
allows, but the real responsibility for the success of the expedition rests
upon ALL of its members.
My role is that of "encounter group facilitator": I jab when things lag.
I provide a sheltered environment where the shy can "come out" without
fear. I manipulate and connive to keep the discussions going at a fever
pitch. I pick and poke, question and debunk, defend and propose, all to
incite people to THINK and to EXPRESS.
Several people who can't be at Penn State this Spring told me they wish
they could participate -- so: I propose opening this course to the entire
world, via the miracles of modern networks! We have arranged a local
mailing list for sharing discussions, source-code, class-session summaries,
and general flammage (amid the chaff there will surely be SOME wheat). I'm aware
of three fora for sharing this: USENET's net.ai, Ken Laws' AIList, and
MIT's SELF-ORG mailing list. PLEASE MAIL ME YOUR REACTIONS to using these
resources: would YOU like to participate? would it be a productive use of
the phone lines? would it be more appropriate to go to /dev/null?
The goals of this course are deliberately ambitious. I seek participants
who are DRIVEN to partake in this journey -- the best, brightest, most
imaginative and highly motivated people the world has to offer.
Course starts Monday, January 16. If response is positive, I'll post the
network arrangements about that time.
This course is dedicated to the proposition that the best way to secure
for ourselves the blessings of life, liberty, and the pursuit of happiness
is reverence for all that makes the human condition beautiful, and the
best way to build that reverence is the scientific study and construction
of the marvels that make us truly human.
--
Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
Arpa: bobgian%psuvax1.bitnet@Berkeley Bitnet: bobgian@PSUVAX1.BITNET
CSnet: bobgian@penn-state.csnet UUCP: allegra!psuvax!bobgian
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802
------------------------------
Date: 1 Jan 84 8:46:31-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Netwide AI Course -- Part 2
Article-I.D.: psuvax.396
*************************************************************************
* *
* Spring Term Artificial Intelligence Seminar Syllabus *
* *
*************************************************************************
MODELS OF SENTIENCE
Learning, Cognitive Model Formation, Insight, Discovery, Expression;
"Subcognition as Computation", "Cognition as Subcomputation";
Physical, Cultural, and Intellectual Evolution.
SYMBOLIC INPUT CHANNELS: PERCEPTION
Vision, hearing, signal processing, the "signal/symbol interface".
SYMBOLIC PROCESSING: COGNITION
Language, Understanding, Goals, Knowledge, Reasoning.
SYMBOLIC OUTPUT CHANNELS: EXPRESSION
Motor skills, Artistic and Musical Creativity, Story Creation,
Prose, Poetry, Persuasion, Beauty.
CONSEQUENCES OF THESE MODELS
Physical Symbol Systems and Godel's Incompleteness Theorems;
The "Aha!!!" Phenomenon, Divine Inspiration, Extra-Sensory Perception,
The Conscious/Unconscious Mind, The "Right-Brain/Left-Brain" Dichotomy;
"Who Am I?", "On Having No Head"; The Nature and Texture of Reality;
The Nature and Role of Humor; The Direct Experience of the Mystical.
TECHNIQUES FOR DEVELOPING THESE ABILITIES IN HUMANS
Meditation, Musical and Artistic Experience, Problem Solving,
Games, Yoga, Zen, Haiku, Koans, "Calculus for Peak Experiences".
TECHNIQUES FOR DEVELOPING THESE ABILITIES IN MACHINES
REVIEW OF LISP PROGRAMMING AND FORMAL SYMBOL MANIPULATION:
Construction and access of symbolic expressions, Evaluation and
Quotation, Predicates, Function definition; Functional arguments
and returned values; Binding strategies -- Local versus Global,
Dynamic versus Lexical, Shallow versus Deep; Compilation of LISP.
IMPLEMENTATION OF LISP: Storage Mapping and the Free List;
The representation of Data: Typed Pointers, Dynamic Allocation;
Symbols and the Symbol Table (Obarray); Garbage Collection
(Sequential and Concurrent algorithms).
REPRESENTATION OF PROCEDURE: Meta-circular definition of the
evaluation process.
"VALUES" AND THE OBJECT-ORIENTED VIEW OF PROGRAMMING: Data-Driven
Programming, Message-Passing, Information Hiding; the MIT Lisp Machine
"Flavor" system; Functional and Object-Oriented systems -- comparison
with SMALLTALK.
SPECIALIZED AI PROGRAMMING TECHNIQUES: Frames and other Knowledge
Representation Languages, Discrimination Nets, Augmented Transition
Networks; Pattern-Directed Inference Systems, Agendas, Chronological
Backtracking, Dependency-Directed Backtracking, Data Dependencies,
Non-Monotonic Logic, and Truth-Maintenance Systems.
LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS:
Frames and other Knowledge Representation Languages, Discrimination
Nets, "Higher" High-Level Languages: PLANNER, CONNIVER, PROLOG.
SCIENTIFIC AND ETHICAL CONSEQUENCES OF THESE ABILITIES IN HUMANS
AND IN MACHINES
The Search for Extra-Terrestrial Intelligence.
(Would we recognize it if we found it? Would they recognize us?)
The Search for Terrestrial Intelligence.
Are We Unique? Are we worth saving? Can we save ourselves?
Why are we here? Why is ANYTHING here? WHAT is here?
Where ARE we? ARE we? Is ANYTHING?
These topics form a cluster of related ideas which we will pursue more-or-
less concurrently; the listing is not meant to imply a particular sequence.
Various course members have expressed interest in the following software
engineering projects. These (and possibly others yet to be suggested)
will run concurrently throughout the course:
LISP Implementations:
For CMS, in PL/I and/or FORTRAN
In PASCAL, optimized for personal computers (esp HP 9816)
In Assembly, optimized for Z80 and MC68000
In 370 BAL, modifications of LISP 1.5
New "High-Level" Systems Languages:
Flavor System (based on the MIT Zetalisp system)
Prolog Interpreter (plus compiler?)
Full Programming Environment (Enhancements to LISP):
Compiler, Editor, Workspace Manager, File System, Debug Tools
Architectures and Languages for Parallel {Sub-}Cognition:
Software and Hardware Alternatives to the Von Neumann Computer
Concurrent Processing and Message Passing systems
Machine Learning and Discovery Systems:
Representation Language for Machine Learning
Strategy Learning for various Games (GO, CHECKERS, CHESS, BACKGAMMON)
Perception and Motor Control Systems:
Vision (implementations of David Marr's theories)
Robotic Welder control system
Creativity Systems:
Poetry Generators (Haiku)
Short-Story Generators
Expert Systems (traditional topic, but including novel features):
Euclidean Plane Geometry Teaching and Theorem-Proving system
Welding Advisor
Meteorological Analysis Teaching system
READINGS -- the following books will be very helpful:
1. ARTIFICIAL INTELLIGENCE, Patrick H. Winston; Addison Wesley, 1984.
2. THE HANDBOOK OF ARTIFICIAL INTELLIGENCE, Avron Barr, Paul Cohen, and
Edward Feigenbaum; William Kaufman Press, 1981 and 1982. Vols 1, 2, 3.
3. MACHINE LEARNING, Michalski, Carbonell, and Mitchell; Tioga, 1983.
4. GODEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID, Douglas R. Hofstadter;
Basic Books, 1979.
5. THE MIND'S I, Douglas R. Hofstadter and Daniel C. Dennett;
Basic Books, 1981.
6. LISP, Patrick Winston and Berthold K. P. Horn; Addison Wesley, 1981.
7. ANATOMY OF LISP, John Allen; McGraw-Hill, 1978.
8. ARTIFICIAL INTELLIGENCE PROGRAMMING, Eugene Charniak, Christopher K.
Riesbeck, and Drew V. McDermott; Lawrence Erlbaum Associates, 1980.
--
Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
Arpa: bobgian%psuvax1.bitnet@Berkeley Bitnet: bobgian@PSUVAX1.BITNET
CSnet: bobgian@penn-state.csnet UUCP: allegra!psuvax!bobgian
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802
------------------------------
End of AIList Digest
********************
∂05-Jan-84 0940 DFH SPECIAL SEMINAR
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Herbert Stoyan, Universitat Erlangen-Nurnberg, Erlangen, Germany
TITLE: Programming styles in AI
TIME: Tuesday, Jan. 10, 1:15-2:30 PM
PLACE: 252 Margaret Jacks, Stanford
Abstract:
There does not seem to be much clarity about genuine AI methods. Scientific
methods are sets of rules used to collect more knowledge about the subject
of research. AI, as an experimental branch of computer science, seems not to
have established programming methods.
In some famous work in AI we can find the following method:
1. develop a new, convenient programming style,
2. invent a new programming language which supports the new style (or embed
   some appropriate elements into an existing AI language, LISP for example),
3. implement the language (interpretation as a first step is typically less
   efficient than compilation),
4. use the new style in programming to make things easier.
A programming style is a way of programming guided by a speculative view
of a machine which works according to the programs. A programming style
is not a programming method. It may be detected by analyzing the text of a
completed program. In general it is possible to program in one programming
language according to the principles of various styles. This is true in
spite of the fact that programming languages are usually designed with some
machine model (and therefore with some programming style) in mind.
We discuss some of the AI programming styles (operator-oriented,
logic-oriented, function-oriented, rule-oriented, goal-oriented,
event-oriented, state-oriented, constraint-oriented, and object-oriented,
not to mention the common instruction-oriented style) and give a more
detailed discussion of how the object-oriented style may be followed in
conventional programming languages.
∂05-Jan-84 0951 @SRI-AI.ARPA:DFH@SU-AI SPECIAL SEMINAR
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Jan 84 09:50:52 PST
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Thu 5 Jan 84 09:48:50-PST
Date: 05 Jan 84 0940 PST
From: Diana Hall <DFH@SU-AI>
Subject: SPECIAL SEMINAR
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Herbert Stoyan, Universitat Erlangen-Nurnberg, Erlangen, Germany
TITLE: Programming styles in AI
TIME: Tuesday, Jan. 10, 1:15-2:30 PM
PLACE: 252 Margaret Jacks, Stanford
Abstract:
There does not seem to be much clarity about genuine AI methods. Scientific
methods are sets of rules used to collect more knowledge about the subject
of research. AI, as an experimental branch of computer science, seems not to
have established programming methods.
In some famous work in AI we can find the following method:
1. develop a new, convenient programming style,
2. invent a new programming language which supports the new style (or embed
   some appropriate elements into an existing AI language, LISP for example),
3. implement the language (interpretation as a first step is typically less
   efficient than compilation),
4. use the new style in programming to make things easier.
A programming style is a way of programming guided by a speculative view
of a machine which works according to the programs. A programming style
is not a programming method. It may be detected by analyzing the text of a
completed program. In general it is possible to program in one programming
language according to the principles of various styles. This is true in
spite of the fact that programming languages are usually designed with some
machine model (and therefore with some programming style) in mind.
We discuss some of the AI programming styles (operator-oriented,
logic-oriented, function-oriented, rule-oriented, goal-oriented,
event-oriented, state-oriented, constraint-oriented, and object-oriented,
not to mention the common instruction-oriented style) and give a more
detailed discussion of how the object-oriented style may be followed in
conventional programming languages.
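As one illustration of the abstract's final point, here is a minimal sketch, not part of the announcement, of how the object-oriented style can be followed in an ordinary Lisp that has no object system at all: a closure plays the role of the object and a symbol the role of the message. MAKE-COUNTER and the message names are invented for illustration.
(defun make-counter (&optional (count 0))
  "Return a closure that responds to the messages :INCREMENT, :VALUE, :RESET."
  (lambda (message &rest args)
    (case message
      (:increment (incf count (if args (first args) 1)))
      (:value     count)
      (:reset     (setf count 0))
      (t          (error "Unknown message: ~S" message)))))
;; Usage: (defvar c (make-counter))
;;        (funcall c :increment)     ; => 1
;;        (funcall c :increment 10)  ; => 11
;;        (funcall c :value)         ; => 11
The state (COUNT) is hidden inside the closure and can be reached only by sending messages, which is the essence of the style under discussion.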
∂05-Jan-84 1218 GOLUB@SU-SCORE.ARPA Faculty meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 5 Jan 84 12:18:03 PST
Date: Thu 5 Jan 84 12:17:25-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Faculty meeting
To: faculty@SU-SCORE.ARPA
Again, the Faculty meeting will take place on Tuesday, Jan 10 at
2:30. Any agenda items? GENE
-------
∂05-Jan-84 1502 LAWS@SRI-AI.ARPA AIList Digest V2 #3
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Jan 84 14:59:11 PST
Date: Wed 4 Jan 1984 17:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #3
To: AIList@SRI-AI
AIList Digest Thursday, 5 Jan 1984 Volume 2 : Issue 3
Today's Topics:
Course - Penn State's First Undergrad AI Course
----------------------------------------------------------------------
Date: 31 Dec 83 15:18:20-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: Penn State's First Undergrad AI Course
Article-I.D.: psuvax.380
Last fall I taught Penn State's first ever undergrad AI course. It
attracted 150 students, including about 20 faculty auditors. I've gotten
requests from several people initiating AI courses elsewhere, and I'm
posting this and the next 6 items in hopes they may help others.
1. General Information
2. Syllabus (slightly more detailed topic outline)
3. First exam
4. Second exam
5. Third exam
6. Overview of how it went.
I'll be giving this course again, and I hate to do anything exactly the
same twice. I welcome comments and suggestions from all net buddies!
-- Bob
[Due to the length of Bob's submission, I will send the three
exams as a separate digest. Bob's proposal for a network AI course
associated with his spring semester curriculum was published in
the previous AIList issue; that was entirely separate from the
following material. -- Ken Laws]
--
Spoken: Bob Giansiracusa
Bell: 814-865-9507
Bitnet: bobgian@PSUVAX1.BITNET
Arpa: bobgian%psuvax1.bitnet@Berkeley
CSnet: bobgian@penn-state.csnet
UUCP: allegra!psuvax!bobgian
USnail: Dept of Comp Sci, Penn State Univ, University Park, PA 16802
------------------------------
Date: 31 Dec 83 15:19:52-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course, Part 1/6
Article-I.D.: psuvax.381
CMPSC 481: INTRODUCTION TO ARTIFICIAL INTELLIGENCE
An introduction to the theory, research paradigms, implementation techniques,
and philosophies of Artificial Intelligence, considered both as a science of
natural intelligence and as the engineering of mechanical intelligence.
OBJECTIVES -- To provide:
1. An understanding of the principles of Artificial Intelligence;
2. An appreciation for the power and complexity of Natural Intelligence;
3. A viewpoint on programming different from and complementary to the
viewpoints engendered by other languages in common use;
4. The motivation and tools for developing good programming style;
5. An appreciation for the power of abstraction at all levels of program
design, especially via embedded compilers and interpreters;
6. A sense of the excitement at the forefront of AI research; and
7. An appreciation for the tremendous impact the field has had and will
continue to have on our perception of our place in the Universe.
TOPIC SUMMARY:
INTRODUCTION: What is "Intelligence"?
Computer modeling of "intelligent" human performance. The Turing Test.
Brief history of AI. Relation of AI to psychology, computer science,
management, engineering, mathematics.
PRELUDE AND FUGUE ON THE "SECRET OF INTELLIGENCE":
"What is a Brain that it may possess Intelligence, and Intelligence that
it may inhabit a Brain?" Introduction to Formal Systems, Physical Symbol
Systems, and Multilevel Interpreters. Necessity and Sufficiency of
Physical Symbol Systems as the basis for intelligence.
REPRESENTATION OF PROBLEMS, GOALS, ACTIONS, AND KNOWLEDGE:
State Space, Predicate Calculus, Production Systems, Procedural
Representations, Semantic Networks, Frames and Scripts.
THE "PROBLEM-SOLVING" PARADIGM AND TECHNIQUES:
Generate and Test, Heuristic Search (Search WITH Heuristics,
Search FOR Heuristics), Game Trees, Minimax, Problem Decomposition,
Means-Ends Analysis, The General Problem Solver (GPS).
LISP PROGRAMMING:
Symbolic Expressions and Symbol Manipulation, Data Structures,
Evaluation and Quotation, Predicates, Input/Output, Recursion.
Declarative and Procedural knowledge representation in LISP.
LISP DETAILS:
Storage Mapping, the Free List, and Garbage Collection,
Binding strategies and the concept of the "Environment", Data-Driven
Programming, Message-Passing, The MIT Lisp Machine "Flavor" system.
LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS:
Frames and other Knowledge Representation Languages, Discrimination
Nets, "Higher" High-Level Languages: PLANNER, CONNIVER, PROLOG.
LOGIC, RULE-BASED SYSTEMS, AND INFERENCE:
Logic: Axioms, Rules of Inference, Theorems, Truth, Provability.
Production Systems: Rule Interpreters, Forward/Backward Chaining.
Expert Systems: Applied Knowledge Representation and Inference.
Data Dependencies, Non-Monotonic Logic, and Truth-Maintenance Systems,
Theorem Proving, Question Answering, and Planning systems.
THE UNDERSTANDING OF NATURAL LANGUAGE:
Formal Linguistics: Grammars and Machines, the Chomsky Hierarchy.
Syntactic Representation: Augmented Transition Networks (ATNs).
Semantic Representation: Conceptual Dependency, Story Understanding.
Spoken Language Understanding.
ROBOTICS: Machine Vision, Manipulator and Locomotion Control.
MACHINE LEARNING:
The Spectrum of Learning: Learning by Adaptation, Learning by Being
Told, Learning from Examples, Learning by Analogy, Learning by
Experimentation, Learning by Observation and Discovery.
Model Induction via Generate-and-Test, Automatic Theory Formation.
A Model for Intellectual Evolution.
RECAPITULATION AND CODA:
The knowledge representation and problem-solving paradigms of AI.
The key ideas and viewpoints in the modeling and creation of intelligence.
Is there more (or less) to Intelligence, Consciousness, the Soul?
Prospectus for the future.
Handouts for the course include:
1. Computer Science as Empirical Inquiry: Symbols and Search. 1975 Turing
Award Lecture by Allen Newell and Herb Simon; Communications of the ACM,
Vol. 19, No. 3, March 1976.
2. Steps Toward Artificial Intelligence. Marvin Minsky; Proceedings of the
IRE, Jan. 1961.
3. Computing Machinery and Intelligence. Alan Turing; Mind (Turing's
original proposal for the "Turing Test").
4. Exploring the Labyrinth of the Mind. James Gleick; New York Times
Magazine, August 21, 1983 (article about Doug Hofstadter's recent work).
TEXTBOOKS:
1. ARTIFICIAL INTELLIGENCE, Patrick H. Winston; Addison Wesley, 1983.
Will be available from publisher in early 1984. I will distribute a
copy printed from Patrick's computer-typeset manuscript.
2. LISP, Patrick Winston and Berthold K. P. Horn; Addison Wesley, 1981.
Excellent introductory programming text, illustrating many AI implementation
techniques at a level accessible to novice programmers.
4. GODEL, ESCHER, BACH: AN ETERNAL GOLDEN BRAID, Douglas R. Hofstadter;
Basic Books, 1979. One of the most entertaining books on the subject of AI,
formal systems, and symbolic modeling of intelligence.
5. THE HANDBOOK OF ARTIFICIAL INTELLIGENCE, Avron Barr, Paul Cohen, and
Edward Feigenbaum; William Kaufman Press, 1981 and 1982. Comes as a three
volume set. Excellent (the best available), but the full set costs over $100.
6. ANATOMY OF LISP, John Allen; McGraw-Hill, 1978. Excellent text on the
definition and implementation of LISP, sufficient to enable one to write a
complete LISP interpreter.
------------------------------
Date: 31 Dec 83 15:21:46-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 2/6 (Topic Outline)
Article-I.D.: psuvax.382
CMPSC 481: INTRODUCTION TO ARTIFICIAL INTELLIGENCE
TOPIC OUTLINE:
INTRODUCTION: What is "Intelligence"?
Computer modeling of "intelligent" human performance. Turing Test.
Brief history of AI. Examples of "intelligent" programs: Evans's Geometric
Analogies, the Logic Theorist, General Problem Solver, Winograd's English
language conversing blocks world program (SHRDLU), MACSYMA, MYCIN, DENDRAL.
PRELUDE AND FUGUE ON THE "SECRET OF INTELLIGENCE":
"What is a Brain that it may possess Intelligence, and Intelligence that
it may inhabit a Brain?" Introduction to Formal Systems, Physical Symbol
Systems, and Multilevel Interpreters.
REPRESENTATION OF PROBLEMS, GOALS, ACTIONS, AND KNOWLEDGE:
State Space problem formulations. Predicate Calculus. Semantic Networks.
Production Systems. Frames and Scripts.
SEARCH:
Representation of problem-solving as graph search.
"Blind" graph search:
Depth-first, Breadth-first.
Heuristic graph search:
Best-first, Branch and Bound, Hill-Climbing.
Representation of game-playing as tree search:
Static Evaluation, Minimax, Alpha-Beta.
Heuristic Search as a General Paradigm:
Search WITH Heuristics, Search FOR Heuristics
THE GENERAL PROBLEM SOLVER (GPS) AS A MODEL OF INTELLIGENCE:
Goals and Subgoals -- problem decomposition
Difference-Operator Tables -- the solution to subproblems
Does the model fit? Does GPS work?
EXPERT SYSTEMS AND KNOWLEDGE ENGINEERING:
Representation of Knowledge: The "Production System" Movement
The components:
Knowledge Base
Inference Engine
Examples of famous systems:
MYCIN, TEIRESIAS, DENDRAL, MACSYMA, PROSPECTOR
INTRODUCTION TO LISP PROGRAMMING:
Symbolic expressions and symbol manipulation:
Basic data types
Symbols
The special symbols T and NIL
Numbers
Functions
Assignment of Values to Symbols (SETQ)
Objects constructed from basic types
Constructor functions: CONS, LIST, and APPEND
Accessor functions: CAR, CDR
Evaluation and Quotation
Predicates
Definition of Functions (DEFUN)
Flow of Control (COND, PROG, DO)
Input and Output (READ, PRINT, TYI, TYO, and friends)
REPRESENTATION OF DECLARATIVE KNOWLEDGE IN LISP:
Built-in representation mechanisms
Property lists
Arrays
User-definable data structures
Data-structure generating macros (DEFSTRUCT)
Manipulation of List Structure
"Pure" operations (CONS, LIST, APPEND, REVERSE)
"Impure" operations (RPLACA and RPLACD, NCONC, NREVERSE)
Storage Mapping, the Free List, and Garbage Collection
REPRESENTATION OF PROCEDURAL KNOWLEDGE IN LISP:
Types of Functions
Expr: Call by Value
Fexpr: Call by Name
Macros and macro-expansion
Functions as Values
APPLY, FUNCALL, LAMBDA expressions
Mapping operators (MAPCAR and friends)
Functional Arguments (FUNARGS)
Functional Returned Values (FUNVALS)
THE MEANING OF "VALUE":
Assignment of values to symbols
Binding of values to symbols
"Local" vs "Global" variables
"Dynamic" vs "Lexical" binding
"Shallow" vs "Deep" binding
The concept of the "Environment"
"VALUES" AND THE OBJECT-CENTERED VIEW OF PROGRAMMING:
Data-Driven programming
Message-passing
Information Hiding
Safety through Modularity
The MIT Lisp Machine "Flavor" system
LISP'S TALENTS IN REPRESENTATION AND SEARCH:
Representation of symbolic structures in LISP
Predicate Calculus
Rule-Based Expert Systems (the Knowledge Base examined)
Frames
Search Strategies in LISP
Breadth-first, Depth-first, Best-first search
Tree search and the simplicity of recursion
Interpretation of symbolic structures in LISP
Rule-Based Expert Systems (the Inference Engine examined)
Symbolic Mathematical Manipulation
Differentiation and Integration
Symbolic Pattern Matching
The DOCTOR program (ELIZA)
LISP AS THE "SYSTEMS SUBSTRATE" FOR HIGHER LEVEL ABSTRACTIONS
Frames and other Knowledge Representation Languages
Discrimination Nets
Augmented Transition Networks (ATNs) as a specification of English syntax
Interpretation of ATNs
Compilation of ATNs
Alternative Control Structures
Pattern-Directed Inference Systems (production system interpreters)
Agendas (best-first search)
Chronological Backtracking (depth-first search)
Dependency-Directed Backtracking
Data Dependencies, Non-Monotonic Logic, and Truth-Maintenance Systems
"Higher" High-Level Languages: PLANNER, CONNIVER
PROBLEM SOLVING AND PLANNING:
Hierarchical models of planning
GPS, STRIPS, ABSTRIPS
Non-Hierarchical models of planning
NOAH, MOLGEN
THE UNDERSTANDING OF NATURAL LANGUAGE:
The History of "Machine Translation" -- a seemingly simple task
The Failure of "Machine Translation" -- the need for deeper understanding
The Syntactic Approach
Grammars and Machines -- the Chomsky Hierarchy
RTNs, ATNs, and the work of Terry Winograd
The Semantic Approach
Conceptual Dependency and the work of Roger Schank
Spoken Language Understanding
HEARSAY
HARPY
ROBOTICS:
Machine Vision
Early visual processing (a signal processing approach)
Scene Analysis and Image Understanding (a symbolic processing approach)
Manipulator and Locomotion Control
Statics, Dynamics, and Control issues
Symbolic planning of movements
MACHINE LEARNING:
Rote Learning and Learning by Adaptation
Samuel's Checker player
Learning from Examples
Winston's ARCH system
Mitchell's Version Space approach
Learning by Planning and Experimentation
Samuel's program revisited
Sussman's HACKER
Mitchell's LEX
Learning by Heuristically Guided Discovery
Lenat's AM (Automated Mathematician)
Extending the Heuristics: EURISKO
Model Induction via Generate-and-Test
The META-DENDRAL project
Automatic Formation of Scientific Theories
Langley's BACON project
A Model for Intellectual Evolution (my own work)
RECAP ON THE PRELUDE AND FUGUE:
Formal Systems, Physical Symbol Systems, and Multilevel Interpreters
revisited -- are they NECESSARY? are they SUFFICIENT? Is there more
(or less) to Intelligence, Consciousness, the Soul?
SUMMARY, CONCLUSIONS, AND FORECASTS:
The representation of knowledge in Artificial Intelligence
The problem-solving paradigms of Artificial Intelligence
The key ideas and viewpoints in the modeling and creation of intelligence
The results to date of the noble effort
Prospectus for the future
------------------------------
Date: 31 Dec 83 15:28:32-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 6/6 (Overview)
Article-I.D.: psuvax.386
A couple of notes about how the course went. Interest was high, but the
main problem I found is that Penn State students are VERY strongly
conditioned to work for grades and little else. Most teachers bore them,
expect them to memorize lectures and regurgitate on exams, and students
then get drunk (over 50 frats here) and promptly forget all. Initially
I tried to teach, but I soon realized that PEOPLE CAN LEARN (if they
really want to) BUT NOBODY CAN TEACH (students who don't want to learn).
As the course evolved my role became less "information courier" and more
"imagination provoker". I designed exams NOT to measure learning but to
provoke thinking (and thereby learning). The first exam (on semantic
nets) was given just BEFORE covering that topic in lecture -- students
had a hell of a hard time on the exam, but they sure sat up and paid
attention to the next week's lectures!
For the second exam I announced that TWO exams were being given: an easy
one (if they sat on one side of the room) and a hard one (on the other side).
Actually the exams were identical. (This explains the first question.)
The winning question submitted from the audience related to the chapter
in GODEL, ESCHER, BACH on the MU system: I gave a few axioms and inference
rules and then asked whether a given wff was a theorem.
The third exam was intended ENTIRELY to provoke discussion and NOT AT ALL
to measure anything. It started with deadly seriousness, then (about 20
minutes into the exam) a few "audience plants" started acting out a
prearranged script which included discussing some of the questions and
writing some answers on the blackboard. The attempt was to puncture the
"exam mentality" and generate some hot-blooded debate (you'll see what I
mean when you see the questions). Even the Teaching Assistants were kept
in the dark about this "script"! Overall, the attempt failed, but many
people did at least tell me that taking the exams was the most fun part
of the course!
With this lead-in, you probably have a clearer picture of some of the
motivations behind the spring term course. To put it bluntly: I CANNOT
TEACH AI. I CAN ONLY HOPE TO INSPIRE INTERESTED STUDENTS TO WANT TO LEARN
AI. I'LL DO ANYTHING I CAN THINK OF WHICH INCREASES THAT INSPIRATION.
The motivational factors also explain my somewhat unusual grading system.
I graded on creativity, imagination, inspiration, desire, energy, enthusiasm,
and gusto. These were partly measured by the exams, partly by the energy
expended on several optional projects (and term paper topics), and partly
by my seat-of-the-pants estimate of how determined a student was to DO real
AI. This school prefers strict objective measures of student performance.
Tough.
This may all be of absolutely no relevance to others teaching AI. Maybe
I'm just weird. I try to cultivate that image, for it seems to attract
the best and brightest students!
-- Bob Giansiracusa
------------------------------
End of AIList Digest
********************
∂05-Jan-84 1629 SCHMIDT@SUMEX-AIM.ARPA 3600 inventory grows
Received: from SUMEX-AIM by SU-AI with TCP/SMTP; 5 Jan 84 16:29:34 PST
Date: Thu 5 Jan 84 16:31:59-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: 3600 inventory grows
To: HPP-Lisp-Machines@SUMEX-AIM.ARPA
This is to inform the curious of a sixth LM-3600 at Stanford, and
welcome a new member to the LM-using community.
The statistics department (in Sequoia Hall) is the proud owner of
a bouncing new 3600. They plan to run it standalone until they find some
solution to the problem of networking it with their Iris (which speaks NS)
and their VAX (which at present also speaks NS). The contacts there are
Mark Matthews (Matthews@Score) and John McDonald (JAM@Score).
By my reckoning, Stanford hosts
1. Mt. St. Coax (Formal Reasoning)
2. Iguana (Formal Reasoning)
3. HPP-3600-1 (Heuristic Programming Project)
4. HPP-3600-2 (Heuristic Programming Project)
5. (Robotics)
6. (Statistics)
--Christopher
-------
∂05-Jan-84 1824 ALMOG@SRI-AI.ARPA Seminar on why DISCOURSE wont go away
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Jan 84 18:24:35 PST
Date: 5 Jan 1984 1821-PST
From: Almog at SRI-AI
Subject: Seminar on why DISCOURSE wont go away
To: csli-friends at SRI-AI
This is to remind you of the continuation of our seminar. The
focus this term is on DISCOURSE. The first speaker will be B. Grosz, who
has been working on discourse phenomena for a long, long time. She
will give some perspectives on research on discourse and focus us on some
particular pieces of discourse that we shall be analyzing this term.
Here is a statement of will and purpose of the seminar as a whole:
WHY DISCOURSE WON'T GO AWAY
Last term we asked (rhetorically): Why CONTEXT won't go away?
This term, we ask (again rhetorically): Why DISCOURSE won't go away?
There are two naive motivations for presupposing that the
question has a real bite. The first is very general; it is a truism
but perhaps an important one: natural language comes in chunks of
sentences. These chunks are produced and understood quite easily by
people. They are meaningful as a unit. People seem to have intuitions
about their internal coherence. They seem to enjoy relations (follow
from, be paraphrases of, summaries of, relevant to, etc.) with each
other--chunks of discourse are related much like constituents of
sentences are related (though the basis for these discourse relations
may be different).
Furthermore, they (please note the perfectly understandable
INTER-discourse pronoun) are the backbone of multiperson communication
(i.e. dialogues, question-answering interactions, etc.). As such, new
types of adequacy conditions seem to grow on them and again we all
seem to abide (most of the time) by those conditions.
Finally, there is the general methodological feeling, again
very naive but sensible, that is analogous to the case of physical
theories: we have theories for sub-atomic particles, atoms, and
molecules (forget all the intermediate possibilities). Would it be
imaginable to focus just on sub-atomic particles or atoms? Surely not.
Actual history teaches us that molecular theories have been the focus
BEFORE sub-atomic theories. The fact that (formally-oriented)
semantics has been done for a long time in a purely a-priori way,
mimicking the model theory of logical languages, may explain the
opposite direction that we encounter in most language research. So, if
you try to be very naive and just LOOK at the natural phenomenon, it's
there, like Salt or Carbon Dioxide.
Now, all this sounds terribly naive. We usually couple it with
the second type of justification for an enterprise: there are actual
linguistic phenomena that seem to go beyond the sentential level. To
drop some names, many scholars seem to consider phenomena like
anaphora, temporal reference, intentional contexts (with a "t"),
definite descriptions, presuppositions, as being inextricably linked
to the discourse level. The connection between our two questions then
becomes more clear: as a discourse unfolds context changes. We cannot
account for the effects of context on interpretation without
considering its dynamic nature. The interpretation of an utterance
affects context as well as being affected by it.
In this seminar, we want to try to get at some of the general
questions of discourse structure and meaning (What if we HAVE to
relate to the discourse, not just the sentential, level in our
analyses?) and the more specific questions having to do with anaphora,
tense and reference. The program for January is:
Jan. 10 B.Grosz
Jan. 17 J.Perry
Jan. 24 J.Perry
Jan. 31 K.Donnellan (visiting from UCLA)
Later speakers: R.Perrault, D.Appelt, S.Soames(Princeton), H.Kamp(London)
P.Suppes, and (possibly) S.Weinstein(U. of Penn.)
******* Time: as in last term, 3.15 pm, Ventura Hall, Tuesday.*************
-------
∂05-Jan-84 1939 LAWS@SRI-AI.ARPA AIList Digest V2 #4
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Jan 84 19:37:47 PST
Date: Thu 5 Jan 1984 11:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #4
To: AIList@SRI-AI
AIList Digest Thursday, 5 Jan 1984 Volume 2 : Issue 4
Today's Topics:
Course - PSU's First AI Course (continued)
----------------------------------------------------------------------
Date: 31 Dec 83 15:23:38-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 3/6 (First Exam)
Article-I.D.: psuvax.383
[The intent and application of the following three exams was described
in the previous digest issue. The exams were intended to look difficult
but to be fun to take. -- KIL]
******** ARTIFICIAL INTELLIGENCE -- First Exam ********
The field of Artificial Intelligence studies the modeling of human
intelligence in the hope of constructing artificial devices that display
similar behavior. This exam is designed to study your ability to model
artificial intelligence in the hope of improving natural devices that
display similar behavior. Please read ALL the questions first, introspect
on how an AI system might solve these problems, then simulate that system.
(Please do all work on separate sheets of paper.)
EASY PROBLEM:
The rules for differentiating polynomials can be expressed as follows:
IF the input is: (A * X ↑ 3) + (B * X ↑ 2) + (C * X ↑ 1) + (D * X ↑ 0)
THEN the output is:
(3 * A * X ↑ 2) + (2 * B * X ↑ 1) + (1 * C * X ↑ 0) + (0 * D * X ↑ -1)
(where "*" indicates multiplication and "↑" indicates exponentiation).
Note that all letters here indicate SYMBOLIC VARIABLES (as in algebra),
not NUMERICAL VALUES (as in FORTRAN).
1. Can you induce from this sample the general rule for polynomial
differentiation? Express that rule in English or Mathematical notation.
(The mathematicians in the group may have some difficulty here.)
2. Can you translate your "informal" specification of the differentiation
rule into a precise statement of an inference rule in a Physical Symbol
System? That is, define a set of objects and relations, a notation for
expressing them (hint: it doesn't hurt for the notation to look somewhat
like a familiar programming language which was invented to do mathematical
notation), and a symbolic transformation rule that encodes the rule of
inference representing differentiation.
3. Can you now IMPLEMENT your Physical Symbol System using some familiar
programming language? That is, write a program which takes as input a
data structure encoding your symbolic representation of a polynomial and
returns a data structure encoding the representation of its derivative.
(Hint as a check on infinite loops: this program can be done in six
or fewer lines of code. Don't be afraid to define a utility function
or two if it helps.)
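For comparison only, and not part of the exam, here is a sketch in present-day Common Lisp of the sort of six-line answer hinted at; it encodes the term A * X ↑ N as the list (A N), a polynomial as a list of such terms, and DIFF-TERM and DIFF-POLY are invented names:
(defun diff-term (term)
  "d/dx of C*X↑N is (N*C)*X↑(N-1); the coefficient stays symbolic."
  (list (list '* (second term) (first term))   ; new coefficient, e.g. (* 3 A)
        (- (second term) 1)))                  ; exponent drops by one
(defun diff-poly (poly)
  "Differentiate a polynomial, given as a list of (coefficient exponent) terms."
  (mapcar #'diff-term poly))
;; (diff-poly '((A 3) (B 2) (C 1) (D 0)))
;;   =>  (((* 3 A) 2) ((* 2 B) 1) ((* 1 C) 0) ((* 0 D) -1))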
SLIGHTLY HARDER PROBLEM:
Consider a world consisting of one block (a small wooden cubical block)
standing on the floor in the middle of a room. A fly is perched on the
South wall, looking North at the block. We want to represent the world
as seen by the fly. In the fly's world the only thing that matters is
the position of that block. Let's represent the world by a graph
consisting of a single node and no links to any other nodes. Easy enough.
4. Now consider a more complicated world. There are TWO blocks, placed
apart from each other along an East/West line. From the fly's point of
view, Block A (the western block) is TO-THE-LEFT-OF Block B (the eastern
block), and Block B has a similar relationship (TO-THE-RIGHT-OF) to
Block A. Draw your symbolic representation of the situation as a graph
with nodes for the blocks and labeled links for the two relationships
which hold between the blocks. (Believe it or not, you have just invented
the representation mechanism called a "semantic network".)
5. Now the fly moves to the northern wall, looking south. Draw the new
semantic network which represents the way the blocks look to him from his
new vantage point.
6. What you have diagrammed in the above two steps is a Physical Symbol
System: a symbolic representation of a situation coupled with a process
for making changes in the representation which correspond homomorphically
with changes in the real world represented by the symbol system.
Unfortunately, your symbol system does not yet have a concrete
representation for this changing process. To make things more concrete,
let's transform to another Physical Symbol System which can encode
EXPLICITLY the representation both of the WORLD (as seen by the fly)
and of HOW THE WORLD CHANGES when the fly moves.
Invent a representation for your semantic network using some familiar
programming language. Remember what is being modeled are OBJECTS (the
blocks) and RELATIONS between the objects. Hint: you might like to
use property lists, but please feel no obligations to do so.
7. Now the clincher which demonstrates the power of the idea that a
physical symbol system can represent PROCESSES as well as OBJECTS and
RELATIONS. Write a program which transforms the WORLD-DESCRIPTION for
FLY-ON-SOUTH-WALL to WORLD-DESCRIPTION for FLY-ON-NORTH-WALL. The
program should be a single function (with auxiliaries if you like)
which takes two arguments, the symbol SOUTH for the initial wall and
NORTH for target wall, uses a global symbol whose value is your semantic
network representing the world seen from the south wall, and returns
T if successful and NIL if not. As a side effect, the function should
CHANGE the symbolic structure representing the world so that afterward
it represents the blocks as seen by the fly from the north wall.
You might care to do this in two steps: first describing in English or
diagrams what is going on and then writing code to do it.
8. The world is getting slightly more complex. Now there are four
blocks, A and B as before (spread apart along an East/West line), C
which is ON-TOP-OF B, and D which is just to the north of (ie, in back
of when seen from the south) B. Let's see your semantic network in
both graphical and Lisp forms. The fly is on South wall, looking North.
(Note that we mean "directly left-of" and so on. A is LEFT-OF B but has
NO relation to D.)
9. Generalize the code you wrote for question 4 (if you haven't already)
so that it correctly transforms the world seen by the fly from ANY of
the four walls (NORTH, EAST, SOUTH, and WEST) to that seen from any other
(including the same) wall. What I mean by "generalize" is don't write
code that works only for the two-block or four-block worlds; code it so
it will work for ANY semantic network representing a world consisting of
ANY number of blocks with arbitrary relations between them chosen from
the set {LEFT-OF, RIGHT-OF, IN-FRONT-OF, IN-BACK-OF, ON-TOP-OF, UNDER}.
(Hint: if you are into group theory you might find a way to do this with
only ONE canonical transformation; otherwise just try a few examples
until you catch on.)
10. Up to now we have been assuming the fly is always right-side-up.
Can you do question 6 under the assumption that the fly sometimes perches
on the wall upside-down? Have your function take two extra arguments
(whose values are RIGHT-SIDE-UP or UPSIDE-DOWN) to specify the fly's
vertical orientation on the initial and final walls.
11. Up to now we have been modeling the WORLD AS SEEN BY THE FLY. If
the fly moves, the world changes. Why is this approach no good when
we allow more flies into the room and wish to model the situation from
ANY of their perspectives?
12. What can be done to fix the problem you pointed out above? That is,
redefine the "axioms" of your representation so it works in the "multiple
conscious agent" case. (Hint: new axioms might include new names for
the relations.)
13. In your new representation, the WORLD is a static object, while we
have functions called "projectors" which given the WORLD and a vantage
point (a symbol from the set {NORTH, EAST, SOUTH, WEST} and another from
the set {RIGHT-SIDE-UP, UPSIDE-DOWN}) return a symbolic description (a
"projection") of the world as seen from that vantage point. For the
reasons you gave in answer to question 11, the projectors CANNOT HAVE
SIDE EFFECTS. Write the projector function.
14. Now let's implement a perceptual cognitive model builder, a program
that takes as input a sensory description (a symbolic structure which
represents the world as seen from a particular vantage point) and a
description of the vantage point and returns a "static world descriptor"
which is invariant with respect to vantage point. Code up such a model
builder, using for input a semantic network of the type you used in
questions 6 through 10 and for output a semantic network of the type
used in questions 12 and 13. (Note that this function is nothing more
than the inverse of the projector from question 13.)
******** THAT'S IT !!! THAT'S IT !!! THAT'S IT !!! ********
SOME HELPFUL LISP FUNCTIONS
You may use these plus anything else discussed in class.
Function  Arguments                              Returns    Side effect
PUTPROP   <symbol> <value> <property-name>  ==>  <value>    adds property
GET       <symbol> <property-name>          ==>  <value>
REMPROP   <symbol> <property-name>          ==>  <value>    removes property
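For comparison only, and not part of the exam, the following sketch in present-day Common Lisp (where the PUTPROP above is written (setf (get ...))) shows one property-list encoding of the two-block world of questions 4-7 and of the fly's move from the south wall to the north wall; the relation names, *BLOCKS*, *FLIPS*, and MOVE-FLY-TO-OPPOSITE-WALL are all invented:
(setf (get 'block-a 'to-the-left-of)  'block-b)   ; the world as seen from
(setf (get 'block-b 'to-the-right-of) 'block-a)   ; the south wall
(defparameter *blocks* '(block-a block-b))
(defparameter *flips*                      ; relations that mirror when the fly
  '((to-the-left-of  . to-the-right-of)    ; crosses to the opposite wall
    (to-the-right-of . to-the-left-of)     ; (ON-TOP-OF and UNDER are unchanged)
    (in-front-of     . in-back-of)
    (in-back-of      . in-front-of)))
(defun move-fly-to-opposite-wall ()
  "Destructively rewrite each left/right and front/back link as its mirror
image, modeling the fly crossing from the south wall to the north wall."
  (let ((changes '()))
    (dolist (blk *blocks*)                       ; first collect every link
      (dolist (pair *flips*)                     ; that has to be mirrored
        (let ((other (get blk (car pair))))
          (when other
            (push (list blk (car pair) (cdr pair) other) changes)))))
    (dolist (change changes t)                   ; then erase the old links
      (destructuring-bind (blk old new other) change  ; and install the new
        (remprop blk old)
        (setf (get blk new) other)))))
;; After (move-fly-to-opposite-wall), (get 'block-a 'to-the-right-of) => BLOCK-B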
***********************************************************************
-- Bob Giansiracusa
------------------------------
Date: 31 Dec 83 15:25:34-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 4/6 (Second Exam)
Article-I.D.: psuvax.384
1. (20) Why are you now sitting on this side of the room? Can you cite
an AI system which used a similar strategy in deciding what to do?
2. (10) Explain the difference between CHRONOLOGICAL and DEPENDENCY-
DIRECTED backtracking.
3. (10) Compare and contrast PRODUCTION SYSTEMS and SEMANTIC NETWORKS in
terms of how they work, what they can represent, and what types of problems
are well suited to solution using each type of knowledge representation.
4. (20) Describe the following searches in detail. In detail means:
1) How do they work?? 2) How are they related to each other??
3) What are their advantages?? 4) What are their disadvantages??
Candidate methods:
1) Depth-first 2) Breadth-first
3) Hill-climbing 4) Beam search
5) Best-first 6) Branch-and-bound
7) Dynamic Programming 8) A*
5. (10) What are the characteristics of good generators for
the GENERATE and TEST problem-solving method?
6. (10) Describe the ideas behind Mini-Max. Describe the ideas behind
Alpha-Beta. How do you use the two of them together and why would you
want to??
7. (50) Godel's Incompleteness Theorem states that any consistent and
sufficiently complex formal system MUST express truths which cannot be
proved within the formal system. Assume that THIS theorem is true.
1. If UNPROVABLE, how did Godel prove it?
2. If PROVABLE, provide an example of a true but unprovable statement.
8. (40) Prove that this exam cannot be finished correctly; that is, prove
that this question is unsolvable.
9. (50) Is human behavior governed by PREDESTINATION or FREE-WILL? How
could you design a formal system to solve problems like that (that is, to
reason about "non-logical" concepts)?
10. (40) Assume only ONE question on this exam were to be graded -- the
question that is answered by the FEWEST number of people. How would you
decide what to do? Show the productions such a system might use.
11. (100) You will be given extra credit (up to 100 points) if by 12:10
pm today you bring to the staff a question. If YOUR question is chosen,
it will be asked and everybody else given 10 points for a correct answer.
YOU will be given 100 points for a correct answer MINUS ONE POINT FOR EACH
CORRECT ANSWER GIVEN BY ANOTHER CLASS MEMBER. What is your question?
-- Bob Giansiracusa
------------------------------
Date: 31 Dec 83 15:27:19-PST (Sat)
From: harpo!floyd!clyde!akgua!psuvax!bobgian @ Ucb-Vax
Subject: PSU's first AI course -- part 5/6 (Third Exam)
Article-I.D.: psuvax.385
1. What is the sum of the first N positive integers? That is, what is:
[put here the sigma-sign notation for the sum]
2. Prove that your answer works for any N > 0.
3. What is the sum of the squares of the first N positive integers:
[put here the sigma-sign notation for the sum]
4. Again, prove it.
5. The proofs you gave (at least, if you are utilizing a "traditional"
mathematical background) are based on "mathematical induction".
Briefly state this principle and explain why it works.
6. If you are like most people, your definition will work only over the
domain of NATURAL NUMBERS (positive integers). Can this definition be
extended to work over ANY countable domain?
7. Consider the lattice of points in N-dimensional space having integer
valued coordinates. Is this space countable?
8. Write a program (or express an algorithm in pseudocode) which returns
the number of points in this space (the one in #7) inside an N-sphere of
radius R (R is a real number > 0).
9. The domains you have considered so far are all countable. The problem
solving methods you have used (if you're "normal") are based on
mathematical induction. Is it possible to extend the principle of
mathematical induction (and recursive programming) to NON-COUNTABLE
domains?
10. If you answered #9 NO, why not? If you answered it YES, how?
11. Problems #1 and #3 require you to perform INDUCTIVE REASONING
(a related but different use of the term "induction"). Discuss some of
the issues involved in getting a computer to perform this process
automatically. (I mean the process of generating a finite symbolic
representation which when evaluated will return the partial sum for
an infinite sequence.)
12. Consider the "sequence extrapolation" task: given a finite sequence
of symbols, predict the next few terms of the sequence or give a rule
which can generate ALL the terms of the sequence. Is this problem
uniquely solvable? Why or why not?
13. If you answered #12 YES, how would you build a computer program to
do so?
14. If you answered #12 NO, how could you constrain the problem to make
it uniquely solvable? How would you build a program to solve the
constrained problem?
15. Mankind is faced with the threat of nuclear annihilation. Is there
anything the field of AI has to offer which might help avert that threat?
(Don't just say "yes" or "no"; come up with something real.)
16. Assuming mankind survives the nuclear age, it is very likely that
ethical issues relating to AI and the use of computers will have very
much to do with the view the "person on the street" has of the human
purpose and role in the Universe. In what way can AI researchers plan
NOW so that these ethical issues are resolved to the benefit of the
greatest number of people?
17. Could it be that our (humankind's) purpose on earth is to invent
and build the species which will be the next in the evolutionary path?
Should we do so? How? Why? Why not?
18. Suppose you have just discovered the "secret" of Artificial
Intelligence; that is, you (working alone and in secret) have figured
out a way (new hardware, new programming methodology, whatever) to build
an artificial device which is MORE INTELLIGENT, BY ANY DEFINITION, BY
ANY TEST WHATSOEVER, than any human being. What do you do with this
knowledge? Explain the pros and cons of several choices.
19. Question #9 indicates that SO FAR all physical symbol systems have
dealt ONLY with discrete domains. Is it possible to generalize the
idea to continuous domains? Since many aspects of the human nervous
system function on a continuous (as opposed to discrete) basis, is it
possible that the invention of CONTINUOUS PHYSICAL SYMBOL SYSTEMS might
provide part of the key to the "secret of intelligence"?
20. What grade do you feel you DESERVE in this course? Why? What
grade do you WANT? Why? If the two differ, is there anything you
want to do to reduce the difference? Why or Why Not? What is it?
Why is it (or is it not) worth doing?
--
Spoken: Bob Giansiracusa
Bell: 814-865-9507
Bitnet: bobgian@PSUVAX1.BITNET
Arpa: bobgian%psuvax1.bitnet@Berkeley
CSnet: bobgian@penn-state.csnet
UUCP: allegra!psuvax!bobgian
USnail: Dept of Comp Sci, Penn State Univ, University Park, PA 16802
------------------------------
End of AIList Digest
********************
∂06-Jan-84 0833 RIGGS@SRI-AI.ARPA STAFF WHERABOUTS JAN 6
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Jan 84 08:33:21 PST
Date: Fri 6 Jan 84 08:33:04-PST
From: RIGGS@SRI-AI.ARPA
Subject: STAFF WHERABOUTS JAN 6
To: CSLI-Folks@SRI-AI.ARPA
A staff meeting is being held in Ventura Hall at 8:30 this morning
and everyone who usually answers a telephone here will be attending.
If you need to reach someone here urgently, please call 497-0628, and
allow the phone to ring several times.
Thank you.
Sandy
-------
∂06-Jan-84 1141 ETCHEMENDY@SRI-AI.ARPA Visitor
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Jan 84 11:40:55 PST
Date: Fri 6 Jan 84 11:39:38-PST
From: ETCHEMENDY@SRI-AI.ARPA
Subject: Visitor
To: csli-folks@SRI-AI.ARPA
This coming week Rohit Parikh, a logician from CCNY/Brooklyn, will
be in the area to give some talks at IBM. Unfortunately we didn't find out
about his trip soon enough to schedule a CSLI sponsored talk. Since he
would like to meet some of the CSLI people, as I'm sure many of us would
like to meet him, he'll be visiting CSLI Thursday afternoon. After the
regularly scheduled events, CSLI will be taking Parikh out to dinner. If
you'd like to come, let me know.
--John Etchemendy
-------
∂06-Jan-84 1429 ELYSE@SU-SCORE.ARPA Forsythe Lectures
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Jan 84 14:29:05 PST
Date: Fri 6 Jan 84 14:26:36-PST
From: Elyse Krupnick <ELYSE@SU-SCORE.ARPA>
Subject: Forsythe Lectures
To: faculty@SU-SCORE.ARPA
Stanford-Phone: (415) 497-9746
Ron Rivest will be giving the Forsythe Lectures the week of January 23.
The lectures are as follows:
Monday, Jan. 23 7:30 pm Skilling Auditorium
"Reflections on Artificial Intelligence"
Wednesday, Jan. 25 7:30 pm Skilling Auditorium
"Estimating a Probability Using Finite Memory"
Thursday, Jan. 26 4:15 pm Jordan 040
"An Algorithm for Minimizing Crossovers in VLSI Designs"
(For two-pin nets only, given a global routing)
AFTER THE WEDNESDAY LECTURE THERE WILL BE A DEPARTMENTAL RECEPTION AT
THE FACULTY CLUB. PLEASE SEND ME OR ELYSE THE NAME AND ADDRESS OF
PERSONS YOU WOULD LIKE TO SEE INVITED.
-Gene-
-------
∂06-Jan-84 1502 JF@SU-SCORE.ARPA computational number theory
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Jan 84 15:02:15 PST
Date: Fri 6 Jan 84 14:59:34-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: computational number theory
To: aflb.local@SU-SCORE.ARPA
i am interested in organizing a seminar at stanford on computational
number theory. i have in mind that a different person present a topic each
week--or take several consecutive weeks in order to cover a difficult
result. i think we should start with the basic primality testing and
factoring algorithms and then decide where to go from there--possibilities
include more recent work on those problems, applications of the basic
results to public-key cryptography, etc.
please let me know if you are interested by sending mail to jf@su-score.
if there is enough interest, we can decide on a good time to meet.
joan feigenbaum
-------
∂06-Jan-84 1524 GOLUB@SU-SCORE.ARPA AGENDA
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Jan 84 15:24:28 PST
Date: Fri 6 Jan 84 15:22:48-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: AGENDA
To: faculty@SU-SCORE.ARPA
TENTATIVE AGENDA
FACULTY MEETING
JANUARY 10, 1984
Room 146 - MJH
1. Presentation of Degree Candidates      Walker            10 mins.
2. Selected Committee Reports
     Ph.D. Admissions                     Reid               5 mins.
     Forum                                Lenat              5 mins.
3. Financial Report                       Ullman/Scott      10 mins.
4. Computer Usage Policy                  Ullman            10 mins.
5. Math Science Program:                  Efron              7 mins.
     Change of Name
6. Departmental Lecturers                 Golub              5 mins.
7. Course Changes                         Golub/Yearwood     5 mins.
8. General Announcements
9. New Business
Please send me any supporting materials as soon as possible.
-------
∂06-Jan-84 1831 GOLUB@SU-SCORE.ARPA Faculty lunch
Received: from SU-SCORE by SU-AI with TCP/SMTP; 6 Jan 84 18:30:59 PST
Date: Fri 6 Jan 84 15:31:20-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Faculty lunch
To: faculty@SU-SCORE.ARPA
There'll be a faculty lunch on Tuesday at 12:15 in MJH. Does anyone
have a topic to discuss? See you then. GENE
-------
∂06-Jan-84 1931 WUNDERMAN@SRI-AI.ARPA Mitch Waldrop's Visit
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Jan 84 19:31:07 PST
Date: Fri 6 Jan 84 17:45:33-PST
From: WUNDERMAN@SRI-AI.ARPA
Subject: Mitch Waldrop's Visit
To: CSLI-Principals@SRI-AI.ARPA
cc: Wunderman@SRI-AI.ARPA
Dr. Mitch Waldrop of Science Magazine will be here next Thursday and
Friday (1/12-13) to gather material about us for a four-part article
he is writing on AI. He is interested in interviewing people in
depth on natural language and what we are doing here at CSLI. He
will be here all day Thursday, attending regular Center Day activities,
then on Friday will be here to meet with individuals on a half-hour
basis.
If you are interested in meeting with him, there is a sign-up sheet
available in my directory <Wunderman>Waldrop which you can use to
enter your name in the slot of your choice. If you have any questions
please call me at 497-1131.
Thanks! --Pat
-------
∂07-Jan-84 0229 RESTIVO@SU-SCORE.ARPA PROLOG Digest V2 #1
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Jan 84 02:29:16 PST
Date: Friday, January 6, 1984 12:49PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V2 #1
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Saturday, 7 Jan 1984 Volume 2 : Issue 1
Today's Topics:
Administration
Query - LP Database & LP for CAD & Lisp vs. LP,
Implementations - Regular Clauses - (nf),
LP Library - UpDate & Additions
----------------------------------------------------------------------
Date: Fri 6 Jan 84 11:17:53-PST
From: Chuck Restivo <Restivo@SU-SCORE.ARPA>
Subject: Administration
I would like to take this opportunity to thank the
Stanford community for making the resources available
for the Prolog Digest. Their continued support is
much appreciated.
Volume One of Prolog Digest has been archived on-line in
{SU-SCORE}'s <Prolog> directory. It is available as
Archive←Volume1←I1-57.Txt
The current volume will be available as Archive.Txt in
<Prolog>
------------------------------
Date: 5 Dec 83 15:05:32-PST (Mon)
From: decvax!mulga!munnari!Isaac @ Ucb-Vax
Subject: Logic Programming Database
If anyone out there has a fairly complete and substantial logic
programming database in UNIX refer type format and is willing
to share it with us, could you please mail me and we will take
it from there. Thank you.
-- Isaac Balbin
Dept. of CS, Melbourne University
CSNet: decvax!mulga!Isaac
------------------------------
Date: Sun, 4 Dec 83 18:07:51 PST
From: Tulin Mangir <Tulin@UCLA-CS>
Subject: LP Applications for CAD in LSI/VLSI
I am preparing a tutorial and a current bibliography, for IEEE,
of the work in the area of expert system applications to CAD
and computer aided testing as well as computer aided processing.
Specific emphasis is on LSI/VLSI design, testing and processing.
I would like this material to be as complete and as current as
we can all make it. So, if you have any material in these areas
that you would like me to include in the notes, ideas about
representation of structure, knowledge, behaviour of digital
circuits, etc., references you know of, please send me a msg.
Thank you.
-- Tulin Mangir
Arpa: CS.Tulin@UCLA-CS
(213) 825-2692
825-4943 (secretary)
------------------------------
Date: Saturday, 3-Dec-83 01:01:08-GMT
From: O'Keefe HPS (on ERCC DEC-10) <OKeefe.R.A.@EDXA>
Subject: Will Prolog ever replace Lisp?
No. Common Lisp will though.
I don't know of any problem areas other than process control
to which APL has not been applied. APL is available on a
wide variety of machines from huge mainframes to soak-the-poor
micros. There is at least one (I think two) time-sharing
service dedicated to APL and APL only, and IBM are selling a
small computer solely on its ability to run APL. If Prolog
achieves that sort of success, I for one will be tickled pink.
(Has Prolog got an ACM SIG yet? Nope. APL has though.)
I think that applicative languages have a very large role to
play in practical computing, and I'm inclined to put Prolog in
that camp, rather than in say the specification languages.
There are lots of algorithms which can be coded as efficiently
in applicative or logic-based languages as in F-----N or A-A
(can the dead sue for libel?). For instance, Warshall's algorithm
for taking the transitive closure of a relation can be coded in
very few lines of pure Prolog (there are other things to go in the
GRAPHS package before I release it). There are also lots of
algorithms which can't be implemented as efficiently if you haven't
got destructive in-place assignment. For instance, topological
sort. The standard algorithm for topological sort takes O(|V|+|E|)
time on a graph with |V| vertices and |E| edges. The best I have been
able to achieve in Prolog is O(|V|↑2) for dense graphs and
O(|V|.lg|V|) for sparse graphs. The problem is decrementing the
counters. I'm afraid I even see a continuing role for assembler,
at least for the next 10 years.
(Just before someone quotes my SigPlan article back at me, I only
said that "assignment is essential" was an odd thing for Guttierez
to say at a functional programming conference. You *can* do without
assignment (= data base hacking) for a lot of things, and for a lot
of AI tasks. A factor of N/lgN can sometimes be a small price to
pay for clarity.)
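[For readers who have not seen such code, the pure-Prolog transitive
closure O'Keefe alludes to can be sketched roughly as below. The
predicate names edge/2 and reachable/2 are illustrative and are not
from his forthcoming GRAPHS package; the naive clauses assume an
acyclic edge relation so that the search terminates.]
% A minimal sketch of a transitive closure in pure Prolog.
% edge/2 is an illustrative relation; reachable/2 is its
% transitive closure, computed by search (no assignment needed).
edge(a,b).
edge(b,c).
edge(c,d).
reachable(X,Y) :- edge(X,Y).
reachable(X,Y) :- edge(X,Z), reachable(Z,Y).
% ?- reachable(a,Y).
% Y = b ; Y = c ; Y = d.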
------------------------------
Date: 27 Dec 83 19:25:51-PST (Tue)
From: pur-ee!uiucdcs!Marcel @ Ucb-Vax
Subject: What I can't do with Regular Clauses - (nf)
Here's a problem I'd like some comments on. If you can solve it,
please send this Digest your solution; if you can't, please let
me know why you found it impossible. First I'll present the
problem (of my own devising), then my comments, for your critique.
Suppose you are shown two lamps, 'a' and 'b', and you
are told that, at any time,
1. at least one of 'a' or 'b' is on.
2. whenever 'a' is on, 'b' is off.
3. each lamp is either on or off.
Without using an exhaustive generate-and-test strategy,
write a Prolog program to enumerate the possible on-off
configurations of the two lamps.
If it were not for the exclusion of dumb-search-and-filter
solutions, this problem would be trivial. The exclusion has
left me baffled, even though the problem seems so logical.
Check me on my thinking about why it's so difficult.
1. The first constraint (one or both lamps on) is not regular
Horn clause logic. I would like to be able to state (as a
fact) that
on(a) OR on(b)
but since regular Horn clauses are restricted to at most one
positive literal I have to recode this. I cannot assert two
independent facts 'on(a)', 'on(b)' since this suggests that
'a' and 'b' are always both on. I can however express it in
regular Horn clause form:
not on(b) IMPLIES on(a)
not on(a) IMPLIES on(b)
As it happens, both of these are logically equivalent to
the original disjunction. So let's write them as Prolog:
on(a) :- not on(b).
on(b) :- not on(a).
First, this is not what the disjunction meant. These rules
say that 'a' is provably on only when 'b' is not provably
on, and vice versa, when in fact 'a' could be on no matter
what 'b' is.
Second, a question ?- on(X). will result in an endless loop.
Third, 'a' is not known to be on except when 'b' is not known
to be on (which is not the same as when 'b' is known to be off).
This sounds as if the closed-world assumption might let us get
away with not being able to prove anything (if we can't prove
something we can always assume its negation). Not so. We do not
know Anything about whether 'a' or 'b' are on OR off; we only
know about constraints Relating their states. Hence we cannot
even describe their possible states, since that would require
filling in (by speculative hypothesis) the states of the lamps.
What is wanted is a non-regular Horn clause, but some of the
nice properties of Logic Programming (e.g. completeness and
consistency under the closed-world assumption, alias a reasonable
negation operator) do not apply to non-regular Horn clauses.
2. The second constraint (whenever 'a' is on, 'b' is off) shares
some of the above problems, and a new one. We want to say
on(a) IMPLIES not on(b), or not on(b) :- on(a).
but this is not possible in Prolog; we have to say it in what I
feel to be a rather contrived manner, namely
on(b) :- on(a), !, fail.
Unfortunately this makes no sense at all to a theoretician. It
is trying to introduce negative information, but under the
closed-world assumption, saying that something is Not true is
just the same as not saying it at all, so the clause is
meaningless.
Alternative: define a new predicate off(X) which is complementary
to on(X). That is the conceptualization suggested by the third
problem constraint.
3. off(X) :- not on(X).
on(X) :- not off(X).
This idea has all the problems of the first constraint, including
the creation of another endless loop.
It seems this problem is beyond the capabilities of present-day
logic programming. Please let me know if you can find a solution,
or if you think my analysis of the difficulties is inaccurate.
-- Marcel Schoppers
Univ. of Illinois at Urbana-Champaign
CSNet: {pur-ee|ihnp4}!uiucdcs!Marcel
------------------------------
Date: Fri 6 Jan 84 11:14:52-PST
From: Chuck Restivo <Restivo@SU-SCORE.ARPA>
Subject: LP Library UpDate
Thanks to Richard O'Keefe, several new and updated LP utilities
have been added to {SU-SCORE}PS:<Prolog> .
Ask.Pl Purpose: updated version
DCSG.Pl Purpose: Preprocessor for definite clause slash
grammars
DCSG.DC Purpose: Two examples that use DCSG.Pl
Heaps.Pl Purpose: code for binary heaps
Helper.Pl Purpose: Print extracts from help files
{ O'Keefe and Pereira }
Medic.Pl Purpose: updated version
TrySee.Pl Purpose: Searches through several directories and
extensions to find a file
Koenraad Lecot submitted a comprehensive bibliography of
LP topics; it is available in {SU-SCORE}'s <Prolog>
directory as:
Prolog←Bib←Lecot.Doc
Comments, suggestions, etc. may be mailed to Koen@UCLA-CS;
updates will probably be announced here.
Prolog-Bib.Doc and Prolog-Bib.Press have been renamed
Prolog←Bib←Pereira.Doc and Prolog←Bib←Pereira.Press
Richard O'Keefe submitted a paper on a Polymorphic Type Scheme
for Prolog. It, along with its Scribe library file, is
available on <Prolog> as
OKeefe←TPL.Mss and OKeefe←MyScrb.Lib
Any questions about access to these can be directed to
me. I have a limited number of hard copies that can
be mailed to those with no access to the electronic
networks.
As always, if you have something interesting or useful,
please *send* it in!
-- ed
------------------------------
End of PROLOG Digest
********************
∂07-Jan-84 1726 BRODER@SU-SCORE.ARPA Next AFLB talk(s)
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Jan 84 17:25:59 PST
Date: Sat 7 Jan 84 17:24:27-PST
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)
To: aflb.all@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787
N E X T A F L B T A L K (S)
First AFLB of 1984 !!!
1/12/84 - Prof. Richard Karp (U.C. Berkeley)
"A Fast Parallel Algorithm for the Maximal independent Set Problem"
We present an algorithm for constructing a maximal independent set in
an undirected graph. The algorithm runs in polylog time and uses a
polynomial-bounded number of processors. The algorithm is based on a
number of techniques that may be of independent interest. These
include the use of a "dynamic pigeonhole principle" and the use of
balanced incomplete block designs to replace random sampling by
deterministic sampling. As a byproduct of the main result we derive a
polylog-time algorithm using a polynomial-bounded number of processors
to construct, where possible, a truth assignment satisfying a given
CNF formula with two literals per clause. The talk is based on joint
work with Avi Wigderson.
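[For readers unfamiliar with the problem: a maximal independent set is
an independent set to which no further vertex can be added. The
obvious sequential greedy construction, which the parallel algorithm
above is meant to beat, can be sketched in Prolog as follows; the code
and the small graph are illustrative only and are not the algorithm of
the talk.]
% Greedy sequential construction of a maximal independent set.
% edge/2 is a small illustrative graph; member/2 is the usual
% list-membership predicate from the library.
edge(1,2).  edge(2,3).  edge(3,4).
adjacent(U,V) :- edge(U,V) ; edge(V,U).
independent(V, Set) :- \+ ( member(U, Set), adjacent(U, V) ).
mis(Vertices, Set) :- mis_(Vertices, [], Set).
mis_([], Set, Set).
mis_([V|Vs], Acc, Set) :-
    ( independent(V, Acc) -> mis_(Vs, [V|Acc], Set)
    ; mis_(Vs, Acc, Set) ).
% ?- mis([1,2,3,4], S).
% S = [3,1]     (the maximal independent set {1,3})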
******** Time and place: Jan. 12, 12:30 pm in MJ352 (Bldg. 460) *******
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Regular AFLB meetings are on Thursdays, at 12:30pm, in MJ352 (Bldg.
460).
If you have a topic you would like to talk about in the AFLB seminar
please tell me. (Electronic mail: broder@su-score.arpa, Office: CSD,
Margaret Jacks Hall 325, (415) 497-1787) Contributions are wanted and
welcome. Not all time slots for the winter quarter have been filled
so far.
For more information about future AFLB meetings and topics you might
want to look at the file [SCORE]<broder>aflb.bboard .
- Andrei Broder
-------
∂09-Jan-84 1215 PATASHNIK@SU-SCORE.ARPA student meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 9 Jan 84 12:15:33 PST
Date: Mon 9 Jan 84 12:04:55-PST
From: Student Bureaucrats <PATASHNIK@SU-SCORE.ARPA>
Subject: student meeting
To: students@SU-SCORE.ARPA
cc: bureaucrat@SU-SCORE.ARPA, faculty@SU-SCORE.ARPA, secretaries@SU-SCORE.ARPA
Reply-To: bureaucrat@score
We will have a student meeting at noon on Wednesday, Jan. 18th in 420-041
(the basement of the Psychology building). If there's something you'd
like to see on the agenda, please send us a message.
--Eric Berglund and Oren Patashnik, bureaucrats
-------
∂09-Jan-84 1414 MOLENDER@SRI-AI.ARPA Talk on ALICE, 1/23, 4:30pm, EK242
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Jan 84 14:14:20 PST
Date: Mon 9 Jan 84 14:09:59-PST
From: Margaret Olender <MOLENDER@SRI-AI.ARPA>
Subject: Talk on ALICE, 1/23, 4:30pm, EK242
To: AIC-Associates: ;
cc: CSLI-Friends@SRI-AI.ARPA
SUBJECT - ALICE
SPEAKER - John Darlington, Department of Computing, Imperial College,
London
WHEN - Monday, January 23, 4:30pm
WHERE - AIC Conference Room, EK242
ALICE
ALICE: A parallel graph-reduction machine for declarative and other
languages.
ABSTRACT
Alice is a highly parallel graph-reduction machine being designed and
built at Imperial College. Although designed for the efficient
execution of declarative languages, such as functional or logic
languages, ALICE is general purpose and can execute sequential
languages also.
This talk will describe the general model of computation, extended
graph reduction, that ALICE executes, outline how different languages
can be supported by this model, and describe the concrete architecture
being constructed. A 24-processor prototype is planned for early
1985. This will give a two-orders-of-magnitude improvement over a VAX
11/750 for declarative languages. ALICE is being constructed out of
two building blocks, a custom-designed switching chip and the INMOS
transputer. So far, compilers for a functional language, several logic
languages, and LISP have been constructed.
-------
∂09-Jan-84 1455 KJB@SRI-AI.ARPA Afternoon seminar for Winter Quarter
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Jan 84 14:55:16 PST
Date: Mon 9 Jan 84 14:53:45-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Afternoon seminar for Winter Quarter
To: csli-folks@SRI-AI.ARPA
The afternoon CSLI seminar for the coming quarter is on situation
semantics. I will teach the first five weeks. Then we will have five
guest lectures discussing applications to natural language. The book
"Situations and Attitudes" by Perry and me will be the text, but the
material will go beyond what is presented there. The seminar is a
course in the philosophy department, so students can get credit for
it. It will meet Thursday afternoons at 2:15. The initial meeting
will be in Redwood Hall, but hopefully the seminar will be small
enough to move to the seminar room in Ventura after a week or two.
Jon Barwise
-------
∂09-Jan-84 1543 REGES@SU-SCORE.ARPA TA's for Winter
Received: from SU-SCORE by SU-AI with TCP/SMTP; 9 Jan 84 15:43:41 PST
Date: Mon 9 Jan 84 15:34:36-PST
From: Stuart Reges <REGES@SU-SCORE.ARPA>
Subject: TA's for Winter
To: faculty@SU-SCORE.ARPA
Office: Margaret Jacks 210, 497-9798
Mark Crispin eliminated some of the mail forwarding entries on January 1st, and
the faculty distribution list has had some problems as a result. I am sending
this out to test it.
While I have your attention, I thought I'd mention that TA applications are
down for Winter Quarter, especially in the upper division courses. If any of
you know PhD students who might be looking to satisfy their TA requirement now,
please send them my way. I have very few applications for anything numbered
above 146.
-------
∂09-Jan-84 1629 ALMOG@SRI-AI.ARPA reminder on WHY DISCOURSE WONT GO AWAY
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Jan 84 16:29:20 PST
Date: 9 Jan 1984 1625-PST
From: Almog at SRI-AI
Subject: reminder on WHY DISCOURSE WONT GO AWAY
To: csli-friends at SRI-AI
Tomorrow is the first meeting of the seminar:
WHY DISCOURSE WONT GO AWAY
We meet at 3:15 pm in Ventura Hall.
The speaker is B.Grosz from CSLI&SRI.
DISCOURSE STRUCTURE AND REFERRING EXPRESSIONS
Barbara J. Grosz
The utterances of a discourse combine into units that are typically
larger than a single utterance, but smaller than the complete
discourse. The utterances that contribute to a particular unit do not
necessarily occur in a linear sequence. It is common both for
contiguous utterances to belong to different units and for
noncontiguous utterances to belong to the same unit. An individual
unit exhibits both internal coherence and coherence with other units.
That is, discourses have been shown to have two levels of coherence:
local coherence (tying together the individual utterances in a unit) and
global coherence (relating the different units to one another).
Certain uses of definite descriptions and pronouns have been shown to
interact differently within these two levels. The presentation will
examine several different samples of discourse, review some work
within AI that treats various of these issues, and describe some
important open problems.
-------
∂09-Jan-84 1641 LAWS@SRI-AI.ARPA AIList Digest V2 #5
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Jan 84 16:41:38 PST
Date: Mon 9 Jan 1984 14:53-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #5
To: AIList@SRI-AI
AIList Digest Tuesday, 10 Jan 1984 Volume 2 : Issue 5
Today's Topics:
AI and Weather Forecasting - Request,
Expert Systems - Request,
Pattern Recognition & Cognition,
Courses - Reaction to PSU's AI Course,
Programming Languages - LISP Advantages
----------------------------------------------------------------------
Date: Mon 9 Jan 84 14:15:13-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: AI and Weather Forecasting
I have been talking with people interested in AI techniques for
weather prediction and meteorological analysis. I would appreciate
pointers to any literature or current work on this subject, especially
* knowledge representations for spatial/temporal reasoning;
* symbolic description of weather patterns;
* capture of forecasting expertise;
* inference methods for estimating meteorological variables
from (spatially and temporally) sparse data;
* methods of interfacing symbolic knowledge and heuristic
reasoning with numerical simulation models;
* any weather-related expert systems.
I am aware of some recent work by Gaffney and Racer (NBS Trends and
Applications, 1983) and by Taniguchi et al. (6th Pat. Rec., 1982),
but I have not been following this field. A bibliography or guide
to relevant literature would be welcome.
-- Ken Laws
------------------------------
Date: 5 January 1984 13:47 est
From: RTaylor.5581i27TK at RADC-MULTICS
Subject: Expert Systems Info Request
Hi, y'all...I have the names (hopefully, correct) of four expert
systems/tools/environments (?). I am interested in the "usual": that
is, general info, who to contact, feedback from users, how to acquire
(if we want it), etc. The four names I have are: RUS, ALX, FRL, and
FRED.
Thanks. Also, thanks to those who provided info previously...I have
info (similar to that requested above) on about 15 other
systems/tools/environments...some of the info is a little sketchy!
Roz (aka: rtaylor at radc-multics)
------------------------------
Date: 3 Jan 84 20:38:52-PST (Tue)
From: decvax!genrad!mit-eddie!rh @ Ucb-Vax
Subject: Re: Loop detection and classical psychology
Article-I.D.: mit-eddi.1114
One of the truly amazing things about the human brain is that its pattern
recognition capabilities seem limitless (in extreme cases). We don't even
have a satisfactory way to describe pattern recognition as it occurs in
our brains. (Well, maybe we have something acceptable at a minimum level.
I'm always impressed by how well dollar-bill changers seem to work.) As
a friend of mine put it, "the brain immediately rejects an infinite number
of wrong answers," when working on a problem.
Randwulf (Randy Haskins); Path= genrad!mit-eddie!rh
------------------------------
Date: Fri 6 Jan 84 10:11:01-PST
From: Ron Brachman <Brachman at SRI-KL>
Subject: PSU's First AI Course
Wow! I actually think it's kind of neat (but, of course, very wacko). I
particularly like making people think about the ethical and philosophical
considerations at the same time as they are thinking about minimax, etc.
------------------------------
Date: Wed 4 Jan 84 17:23:38-PST
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: AIList Digest V2 #1
[in response to Herb Lin's questions]
Well, 2 more or less answers 1. One of the main reasons why Lisp and not C
is the language of many people's choice for AI work is that you can easily cons
up at run time a piece of data which "is" the next action you are going to
take. In most languages you are restricted to choosing from pre-written
actions, unless you include some kind of interpreter right there in your AI
program. Another reason is that Lisp has all sorts of extensibility.
As for 3, the obvious response is that in Pascal control has to be routed to an
IF statement before it can do any good, whereas in a production system, control
automatically "goes" to any production that is applicable. This is highly
over-simplified and may not be the answer you were looking for.
- Richard
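[The same point can be made in Prolog, for comparison: a goal can be
assembled from data at run time and then executed. The fragment below
is only an illustration; make_action/3 and greet/1 are made-up names.]
% Building an "action" at run time from pieces of data, then running it.
greet(Name) :- write(hello(Name)), nl.
make_action(Verb, Arg, Goal) :-
    Goal =.. [Verb, Arg].       % construct a goal term from data
% ?- make_action(greet, world, G), call(G).
% prints hello(world)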
------------------------------
Date: Friday, 6 Jan 1984 13:10-PST
From: narain@rand-unix
Subject: Reply to Herb Lin: Why is Lisp good for AI?
A central issue in AI is knowledge representation. Experimentation with a
new KR scheme often involves defining a new language. Often, definitions
and meanings of new languages are conceived of naturally in terms of
recursive (hierarchical) structures. For instance, many grammars of
English-like front ends are recursive, as are production-system
definitions and theorem provers.
The abstract machinery underlying Lisp, the Lambda Calculus, is also
inherently recursive, yet very simple and powerful. It involves the notion
of function application to symbolic expressions. Functions can themselves
be symbolic expressions. Symbolic expressions provide a basis for SIMPLE
implementation and manipulation of complex data/knowledge/program
structures.
It is therefore possible to easily interpret new language primitives in
terms of Lisp's already very high level primitives. Thus, Lisp is a great
"machine language" for AI.
The usefulness of a well understood, powerful, abstract machinery of the
implementation language is probably more obvious when we consider Prolog.
The logical interpretation of Prolog programs helps considerably in their
development and verification. Logic is a convenient specification language
for a lot of AI, and it is far easier to 'compile' those specifications
into a logic language like Prolog than into Pascal. For instance, take
natural language front ends implemented in DCGs or database/expert-system
integrity and redundancy constraints.
The fact that programs can be considered as data is not true only of Lisp.
Even in Pascal you can analyze a Pascal program. The nice thing in Lisp,
however, is that because of its few (but very powerful) primitives,
programs tend to be simply structured and concise (cf. claims in recent
issues of this bulletin that Lisp programs were much shorter than Pascal
programs). So naturally it is simpler to analyze Lisp programs in Lisp
than it is to analyze Pascal programs in Pascal.
Of course, Lisp environments have evolved for over two decades and
contribute no less to its desirability for AI. Some of the nice features
include screen-oriented editors, interactiveness, debugging facilities, and
an extremely simple syntax.
I would greatly appreciate any comments on the above.
Sanjai Narain
Rand.
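[As a concrete illustration of the DCG point above, a toy
natural-language front end can be written directly as Prolog grammar
rules. The grammar and lexicon below are invented for illustration
and are not from any particular system.]
% A toy definite-clause grammar: each rule both parses a phrase and
% builds a parse tree for it.
sentence(s(NP,VP))    --> noun_phrase(NP), verb_phrase(VP).
noun_phrase(np(D,N))  --> det(D), noun(N).
verb_phrase(vp(V,NP)) --> verb(V), noun_phrase(NP).
det(det(the))     --> [the].
noun(noun(robot)) --> [robot].
noun(noun(block)) --> [block].
verb(verb(moves)) --> [moves].
% ?- phrase(sentence(T), [the,robot,moves,the,block]).
% T = s(np(det(the),noun(robot)),
%       vp(verb(moves),np(det(the),noun(block))))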
------------------------------
Date: 6 Jan 84 13:20:29-PST (Fri)
From: ihnp4!mit-eddie!rh @ Ucb-Vax
Subject: Re: Herb Lin's questons on LISP etc.
Article-I.D.: mit-eddi.1129
One of the problems with LISP, however, is that it does not force one
to subscribe to a code of good programming practices. I've found
that the things I have written for my bridge-playing program (over
the last 18 months or so) have gotten incredibly crufty, with
some real brain-damaged patches. Yeah, I realize it's my fault;
I'm not complaining about it because I love LISP, I just wanted
to mention some of the pitfalls for people to think about. Right
now, I'm in the process of weeding out the cruft, trying to make
it more clearly modular, decrease the number of similar functions
and so on. Sigh.
Randwulf (Randy Haskins); Path= genrad!mit-eddie!rh
------------------------------
Date: 7 January 1984 15:08 EST
From: Herb Lin <LIN @ MIT-ML>
Subject: my questions of last Digest on differences between PASCAL
and LISP
So many people replied that I send my thanks to all via the list. I
very much appreciate the time and effort people put into their
comments.
------------------------------
End of AIList Digest
********************
∂10-Jan-84 1139 @SRI-AI.ARPA:TW@SU-AI CS377 Talkware seminar Monday 1/16: Gould and Finzer (PARC LRG)
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84 11:38:53 PST
Received: from SU-AI.ARPA by SRI-AI.ARPA with TCP; Tue 10 Jan 84 11:35:06-PST
Date: 10 Jan 84 1131 PST
From: Terry Winograd <TW@SU-AI>
Subject: CS377 Talkware seminar Monday 1/16: Gould and Finzer (PARC LRG)
To: "@377.DIS[1,TW]"@SU-AI
Date: MONDAY January 16 * Note change of day *
Speaker:Laura Gould and William Finzer (Xerox PARC LRG)
Topic: Programming by Rehearsal
Time: 2:15-4
Place: To be announced later this week
Abstract:
Programming by Rehearsal is the name given to a graphical programming
environment, devised for the use of teachers or curriculum designers who want
to construct interactive, instructional activities for their students to use. The
process itself relies heavily on interactive graphics and allows designers to react
immediately to their emerging products by showing them, at all stages of
development, exactly what their potential users will see. The process is quick,
easy, and fun to engage in; a simple activity may be constructed in less than
half an hour.
In using the system, designers rely heavily on a set of predefined 'performers',
each of which comes equipped with a set of predefined actions; each action is
associated with a specific 'cue'. A designer can 'audition' a performer to see how
it behaves by selecting its various cues and watching its performance. The
system also allows the designer to construct new performers and to teach them
new cues. A large help system provides procedural as well as descriptive
assistance.
Programming by Rehearsal is implemented in Smalltalk-80 and runs on a Dorado.
The current system contains eighteen predefined performers from which several
dozen productions have been made, some by non-programmers. A video tape will
be shown which illustrates not only the productions but also the process by
which they were created.
∂10-Jan-84 1141 GOLUB@SU-SCORE.ARPA Today's events
Received: from SU-SCORE by SU-AI with TCP/SMTP; 10 Jan 84 11:41:37 PST
Date: Tue 10 Jan 84 11:32:45-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Today's events
To: faculty@SU-SCORE.ARPA
Lunch at 12:15, Faculty meeting , 2:30.
Gene
-------
∂10-Jan-84 1336 LAWS@SRI-AI.ARPA AIList Digest V2 #6
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84 13:34:22 PST
Date: Tue 10 Jan 1984 09:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #6
To: AIList@SRI-AI
AIList Digest Tuesday, 10 Jan 1984 Volume 2 : Issue 6
Today's Topics:
Humor,
Seminars - Programming Styles & ALICE & 5th Generation,
Courses - Geometric Data Structures & Programming Techniques & Linguistics
----------------------------------------------------------------------
Date: Mon, 9 Jan 84 08:45 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: An AI Joke
Last week a cartoon appeared in our local (Rochester NY) paper. It was
by a fellow named Toles, a really excellent editorial cartoonist who
works out of, of all places, Buffalo:
Panel 1:
[medium view of the Duckburg Computer School building. A word balloon
extends from one of the windows]
"A lot of you wonder why we have to spend so much time studying these
things."
Panel 2:
[same as panel 1]
"It so happens that they represent a lot of power. And if we want to
understand and control that power, we have to study them."
Panel 3:
[interior view of a classroom full of personal computers. At right,
several persons are entering. At left, a PC speaks]
". . .so work hard and no talking. Here they come."
Tickler (a mini-cartoon down in the corner):
[a lone PC speaks to the cartoonist]
"But I just HATE it when they touch me like that. . ."
Mark
------------------------------
Date: Sat, 7 Jan 84 20:02 PST
From: Vaughan Pratt <pratt@navajo>
Subject: Imminent garbage collection of Peter Coutts. :=)
[Here's another one, reprinted from the SU-SCORE bboard. -- KIL]
Les Goldschlager is visiting us on sabbatical from Sydney University, and
stayed with us while looking for a place to stay. We belatedly pointed him
at Peter Coutts, which he immediately investigated and found a place to
stay right away. His comment was that no pointer to Peter Coutts existed
in any of the housing assistance services provided by Stanford, and that
therefore it seemed likely that it would be garbage collected soon.
-v
------------------------------
Date: 6 January 1984 23:48 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Seminar on Programming Styles in AI
DATE: Thursday, January 12, 1984
TIME: 3.45 p.m. Refreshments
4.00 p.m. Lecture
PLACE: NE43-8th Floor, AI Playroom
PROGRAMMING STYLES IN ARTIFICIAL INTELLIGENCE
Herbert Stoyan
University of Erlangen, West Germany
ABSTRACT
Not much is clear about the scientific methods used in AI research.
Scientific methods are sets of rules used to collect knowledge about the
subject being researched. AI is an experimental branch of computer science
which does not seem to use established programming methods. In several
works on AI we can find the following method:
1. develop a new convenient programming style
2. invent a new programming language which supports the new style
(or embed some appropriate elements into an existing AI language,
such as LISP)
3. implement the language (interpretation as a first step is
typically less efficient than compilation)
4. use the new programming style to make things easier.
A programming style is a way of programming guided by a speculative view of
a machine which works according to the programs. A programming style is
not a programming method. It may be detected by analyzing the text of a
completed program. In general, it is possible to program in one
programming language according to the principles of various styles. This
is true in spite of the fact that programming languages are usually
designed with some machine model (and therefore with some programming
style) in mind. We discuss some of the AI programming styles. These
include operator-oriented, logic-oriented, function-oriented, rule-
oriented, goal-oriented, event-oriented, state-oriented, constraint-
oriented, and object-oriented. (We shall not however discuss the common
instruction-oriented programming style). We shall also give a more detailed
discussion of how an object-oriented programming style may be used in
conventional programming languages.
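[As one small illustration of the last point, an object-oriented style
can be imitated in ordinary Prolog: objects are terms, and a dispatch
predicate forwards messages to the clauses that define each object's
behaviour. The sketch and its names (send/2, method/2, counter/1) are
invented for illustration.]
% Objects as terms, with send/2 dispatching messages to method clauses.
send(Object, Message) :- method(Object, Message).
method(counter(N), value(N)).
method(counter(N), next(counter(M))) :- M is N + 1.
% ?- send(counter(3), next(C)), send(C, value(V)).
% C = counter(4), V = 4.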
HOST: Professor Ramesh Patil
------------------------------
Date: Mon 9 Jan 84 14:09:07-PST
From: Laws@SRI-AI
Subject: SRI Talk on ALICE, 1/23, 4:30pm, EK242
ALICE: A parallel graph-reduction machine for declarative and other
languages.
SPEAKER - John Darlington, Department of Computing, Imperial College,
London
WHEN - Monday, January 23, 4:30pm
WHERE - AIC Conference Room, EK242
[This is an SRI AI Center talk. Contact Margaret Olender at
MOLENDER@SRI-AI or 859-5923 if you would like to attend. -- KIL]
ABSTRACT
Alice is a highly parallel graph-reduction machine being designed and
built at Imperial College. Although designed for the efficient
execution of declarative languages, such as functional or logic
languages, ALICE is general purpose and can execute sequential
languages also.
This talk will describe the general model of computation, extended
graph reduction, that ALICE executes, outline how different languages
can be supported by this model, and describe the concrete architecture
being constructed. A 24-processor prototype is planned for early
1985. This will give a two-orders-of-magnitude improvement over a VAX
11/750 for declarative languages. ALICE is being constructed out of
two building blocks, a custom-designed switching chip and the INMOS
transputer. So far, compilers for a functional language, several logic
languages, and LISP have been constructed.
------------------------------
Date: 9 Jan 1984 1556-PST
From: OAKLEY at SRI-CSL
Subject: SRI 5th Generation Talk
Japan's 5th Generation Computer Project: Past, Present, and Future
-- personal observations by a researcher of
ETL (ElectroTechnical Laboratory)
Kokichi FUTATSUGI
Senior Research Scientist, ETL
International Fellow, SRI-CSL
Talk on January 24, 1984, in conference room EL369 at 10:00am.
[This is an SRI Computer Science Laboratory talk. Contact Mary Oakley
at OAKLEY@SRI-AI or 859-5924 if you would like to attend. -- KIL]
1 Introduction
* general overview of Japan's research activities in
computer science and technology
* a personal view
2 Past -- pre-history of ICOT (the Institute of New Generation
Computer Technology)
* ETL's PIPS project
* preliminary research and study activities
* the establishment of ICOT
3 Present -- present activities
* the organization of ICOT
* research activities inside ICOT
* research activities outside ICOT
4 Future -- ICOT's plans and general overview
* ICOT's plans
* relations to other research activities
* some comments
------------------------------
Date: Thu 5 Jan 84 16:41:57-PST
From: Martti Mantyla <MANTYLA@SU-SIERRA.ARPA>
Subject: Data Structures & Algorithms for Geometric Problems
[Reprinted from the SU-SCORE bboard.]
NEW COURSE:
EE392 DATA STRUCTURES AND ALGORITHMS
FOR GEOMETRIC PROBLEMS
Many problems arising in science and engineering deal with geometric
information. Engineering design is most often a spatial activity, in which a
physical shape with certain desired properties must be created. Engineering
analysis also relies heavily on information about the geometric form of the
object.
The seminar Data Structures and Algorithms for Geometric Problems deals with
problems related to representing and processing data on the geometric shape of
an object in a computer. It will concentrate on practically interesting
solutions to tasks such as
- representation of digital images,
- representation of line figures,
- representation of three-dimensional solid objects, and
- representation of VLSI circuits.
The point of view taken is hence slightly different from a "hard-core"
Computational Geometry view that puts emphasis on asymptotic computational
complexity. In practice, one needs solutions that can be implemented in a
reasonable time, are efficient and robust enough, and can support an
interesting scope of applications. Of growing importance is to find
representations and algorithms for geometry that are appropriate for
implementation in special hardware and VLSI in particular.
The seminar will be headed by
Dr. Martti Mantyla (MaM)
Visiting Scholar
CSL/ERL 405
7-9310
MANTYLA@SU-SIERRA.ARPA
who will give introductory talks. Guest speakers of the seminar include
well-known scientists and practitioners of the field such as Dr. Leo Guibas and
Dr. John Ousterhout. Classes are held on
Tuesdays, 2:30 - 3:30
in
ERL 126
First class will be on 1/10.
The seminar should be of interest to CS/EE graduate students with research
interests in computer graphics, computational geometry, or computer
applications in engineering.
------------------------------
Date: 6 Jan 1984 1350-EST
From: KANT at CMU-CS-C.ARPA
Subject: AI Programming Techniques Course
[Reprinted from the CMUC bboard.]
Announcing another action-packed AI mini-course!
Starting soon in the 5409 near you.
This course covers a variety of AI programming techniques and languages.
The lectures will assume a background equivalent to an introductory AI course
(such as the undergraduate course 15-380/381 or the graduate core course
15-780.) They also assume that you have had at least a brief introduction to
LISP and a production-system language such as OPS5.
15-880 A, Artificial Intelligence Programming Techniques
MW 2:30-3:50, WeH 5409
T Jan 10 (Brief organizational meeting only)
W Jan 11 LISP: Basic Pattern Matching (Carbonell)
M Jan 16 LISP: Deductive Data Bases (Steele)
W Jan 18 LISP: Basic Control: backtracking, demons (Steele)
M Jan 23 LISP: Non-Standard Control Mechanisms (Carbonell)
W Jan 25 LISP: Semantic Grammar Interpreter (Carbonell)
M Jan 30 LISP: Case-Frame interpreter (Hayes)
W Feb 1 PROLOG I (Steele)
M Feb 6 PROLOG II (Steele)
W Feb 8 Reason Maintenance and Comparison with PROLOG (Steele)
M Feb 13 AI Programming Environments and Hardware I (Fahlman)
W Feb 15 AI Programming Environments and Hardware II (Fahlman)
M Feb 20 Schema Representation Languages I (Fox)
W Feb 22 Schema Representation Languages II (Fox)
W Feb 29 User-Interface Issues in AI (Hayes)
M Mar 5 Efficient Game Playing and Searching (Berliner)
W Mar 7 Production Systems: Basic Programming Techniques (Kant)
M Mar 12 Production Systems: OPS5 Programming (Kant)
W Mar 14 Efficiency and Measurement in Production Systems (Forgy)
M Mar 16 Implementing Diagnostic Systems as Production Systems (Kahn)
M Mar 26 Intelligent Tutoring Systems: GRAPES and ACT Implementations
(Anderson)
W Mar 28 Explanation and Knowledge Acquisition in Expert Systems
(McDermott)
M Apr 2 A Production System for Problem Solving: SOAR2 (Laird)
W Apr 4 Integrating Expert-System Tools with SRL (KAS, PSRL, PDS)
(Rychener)
M Apr 9 Additional Expert System Tools: EMYCIN, HEARSAY-III, ROSIE,
LOOPS, KEE (Rosenbloom)
W Apr 11 A Modifiable Production-System Architecture: PRISM (Langley)
M Apr 16 (additional topics open to negotiation)
------------------------------
Date: 9 Jan 1984 1238:48-EST
From: Lori Levin <LEVIN@CMU-CS-C.ARPA>
Subject: Linguistics Course
[Reprinted from the CMUC bboard.]
NATURAL LANGUAGE SYNTAX FOR COMPUTER SCIENTISTS
FRIDAYS 10:00 AM - 12:00
4605 Wean Hall
Lori Levin
Richmond Thomason
Department of Linguistics
University of Pittsburgh
This is an introduction to recent work in generative syntax. The
course will deal with the formalism of some of the leading syntactic
theories as well as with methodological issues. Computer scientists
find the formalism used by syntacticians easy to learn, and so the
course will begin at a fairly advanced level, though no special
knowledge of syntax will be presupposed.
We will begin with a sketch of the "Standard Theory," Chomsky's
approach of the mid-60's from which most of the current theories have
evolved. Then we will examine Government-Binding Theory, the
transformational approach now favored at M.I.T. Finally, we will
discuss in more detail two nontransformational theories that are more
computationally tractable and have figured in joint research projects
involving linguists, psychologists, and computer scientists:
Lexical-Functional Grammar and Generalized Context-Free Phrase
Structure Grammar.
------------------------------
End of AIList Digest
********************
∂10-Jan-84 1348 LEISER@SRI-AI.ARPA DIABLO HOOKUP IN VENTURA 7
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84 13:47:54 PST
Date: Tue 10 Jan 84 13:46:16-PST
From: Michele <LEISER@SRI-AI.ARPA>
Subject: DIABLO HOOKUP IN VENTURA 7
To: csli-folks@SRI-AI.ARPA
******************************************************************************
The Diablo printer located in Room 7 of Ventura Hall is now connected to a
business line (not CENTREX).
You need not dial the "9" prefix in future!
--------------------
Also, instructions have been posted in Room 7 regarding IMAGEN/CSLI spooler
problems. Eventually, fuller documentation will be available at all printer
locations.
Thank you.
******************************************************************************
-------
∂10-Jan-84 1513 WUNDERMAN@SRI-AI.ARPA Dr. Mitch Waldrop's Visit
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84 15:13:25 PST
Date: Tue 10 Jan 84 15:13:17-PST
From: WUNDERMAN@SRI-AI.ARPA
Subject: Dr. Mitch Waldrop's Visit
To: CSLI-Principals@SRI-AI.ARPA
cc: Wunderman@SRI-AI.ARPA
This Thursday and Friday, 1/12-13, Mitch Waldrop from Science Magazine
will be at CSLI to learn about our research projects in preparation for
an article he is writing on A-I, natural language, vision and robotics.
He will be here for the usual Thursday activities, then will meet with
individual researchers on Fri. to do more "in-depth" interviews. There
are still some times available for him to meet with you, if you are
interested. Send me a message or call 497-1131 with your time preference.
Thanks.
--Pat W.
-------
∂10-Jan-84 1618 @SU-SCORE.ARPA:JMC@SU-AI industrial lectures
Received: from SU-SCORE by SU-AI with TCP/SMTP; 10 Jan 84 16:18:21 PST
Received: from SU-AI.ARPA by SU-SCORE.ARPA with TCP; Tue 10 Jan 84 16:08:08-PST
Date: 10 Jan 84 1607 PST
From: John McCarthy <JMC@SU-AI>
Subject: industrial lectures
To: faculty@SU-SCORE, su-bboards@SU-AI
The faculty has voted to continue the program next year. Please
encourage applications to teach a one quarter course. The application
should consist of a course description suitable for inclusion in
the Stanford catalog together with as much vita as the applicant
wishes considered. Payment will be 1/16 of the lecturer's annual
salary, with a maximum of $3,000 for a one-quarter course.
∂10-Jan-84 1640 LEISER@SRI-AI.ARPA INTRODUCTION TO THE DEC 2060
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84 16:39:59 PST
Date: Tue 10 Jan 84 16:36:35-PST
From: Michele <LEISER@SRI-AI.ARPA>
Subject: INTRODUCTION TO THE DEC 2060
To: csli-folks@SRI-AI.ARPA
******************************************************************************
Documentation entitled "INTRODUCTION TO THE DEC" is now available from the
CSLI Computer Facility, Room 42 Casita (behind Ventura Hall).
This is truly a beginner's guide to interaction with the DEC-2060 computer.
The following topics are discussed:
* EXEC level commands
* Control characters
* File nomenclature/protection/retention
* Customization of LOGIN and LOGOUT files
* Special programs, including EMACS and MM.
If you need a copy for your office, please contact Michele Leiser at
497-2607 or send mail to LEISER@sri-ai.
Suggestions for further documentation, help files or classes will be
gratefully accepted!
Thank you.
******************************************************************************
-------
∂10-Jan-84 1642 EMMA@SRI-AI.ARPA Directory
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84 16:42:21 PST
Date: Tue 10 Jan 84 16:41:04-PST
From: Emma Pease <EMMA@SRI-AI.ARPA>
Subject: Directory
To: csli-folks@SRI-AI.ARPA
A preliminary draft of the directory is now available in
<emma>scratch.dir and can be requested from Emma@sri-ai. Please send
me any comments and corrections.
Emma
-------
∂10-Jan-84 1748 MOLENDER@SRI-AI.ARPA Talk on data types
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84 17:48:14 PST
Date: Tue 10 Jan 84 17:46:55-PST
From: Margaret Olender <MOLENDER@SRI-AI.ARPA>
Subject: Talk on data types
To: CSLI-Friends@SRI-AI.ARPA
SPEAKER: Dr. Peter Pepper, Institut fuer Informatik, Technischen
Universitaet, Munich
SUBJECT: ``ALGEBRAIC DATA TYPES''
PLACE: EK242
TIME: 4:15pm
Dr. Peter Pepper, from the Institut fuer Informatik of the Technischen
Universitaet in Munich, will give a talk on Wednesday, January 18, at
4:15pm in the AIC Conference Room, EK242, on ``Implementation of
Algebraic Data Types.'' Abstract to follow later.
-------
∂10-Jan-84 1801 STAN@SRI-AI.ARPA Course Announcement
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84 18:01:45 PST
Date: 10 Jan 1984 1748-PST
From: Stan at SRI-AI
Subject: Course Announcement
To: CSLI-folks:
Course Announcement: CS 400B
Time: Wednesdays, 3:15-5:00
Starting: January 11
Place: Room 352, Margaret Jacks Hall
THEORETICAL ASPECTS OF ROBOT COGNITION AND ACTION
Stan Rosenschein
Artificial Intelligence Center
SRI International
This course will review fundamental theoretical problems in the design
of artifacts which sense and affect complex environments. The focus of
the course will be on the use of concepts from symbolic logic and
theoretical computer science to rigorously characterize the notion of
a rational cognitive agent. In particular, the course will
investigate the role of knowledge, belief, desire, intention,
planning, and action from several points of view: (1) their formal
properties as studied in idealized models abstracted from common sense,
(2) their respective roles in allowing an organism to carry out
complex purposive behavior, and (3) various suggested computational
realizations. The course will attempt to unify these topics, suggest
directions for an integrated theory of robot action, and indicate how
such a theory might be applied to concrete problems in AI.
-------
∂10-Jan-84 2046 @SRI-AI.ARPA:keisler@wisc-rsch Re: Talk on data types
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Jan 84 20:46:26 PST
Received: from wisc-rsch.ARPA by SRI-AI.ARPA with TCP; Tue 10 Jan 84 20:46:11-PST
Date: Tue, 10 Jan 84 22:45:16 cst
From: keisler@wisc-rsch (Jerry Keisler)
Message-Id: <8401110445.AA15480@wisc-rsch.ARPA>
Received: by wisc-rsch.ARPA (4.12/3.7)
id AA15480; Tue, 10 Jan 84 22:45:16 cst
To: MOLENDER@SRI-AI.ARPA
Subject: Re: Talk on data types
Cc: CSLI-Friends@SRI-AI.ARPA
r
$
∂11-Jan-84 0915 BERG@SU-SCORE.ARPA textbooks
Received: from SU-SCORE by SU-AI with TCP/SMTP; 11 Jan 84 09:15:40 PST
Date: Wed 11 Jan 84 09:10:33-PST
From: Kathy Berg <BERG@SU-SCORE.ARPA>
Subject: textbooks
To: CSD-Faculty: ;
cc: berg@SU-SCORE.ARPA
Stanford-Phone: (415) 497-4776
Textbook orders for spring quarter are due. The request form was put
in your boxes on December 30. I must receive your lists as soon as possible, or
the bookstore will be unable to have the textbooks on the shelves for
the start of next quarter.
Please put your lists in my box. Your cooperation would be greatly
appreciated.
Kathy
-------
∂11-Jan-84 1203 @SRI-AI.ARPA:halvorsen.pa@PARC-MAXC.ARPA Re: Directory
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Jan 84 12:03:29 PST
Received: from PARC-MAXC.ARPA by SRI-AI.ARPA with TCP; Wed 11 Jan 84 12:02:06-PST
Date: 11 Jan 84 11:59 PST
From: halvorsen.pa@PARC-MAXC.ARPA
Subject: Re: Directory
In-reply-to: Emma Pease <EMMA@SRI-AI.ARPA>'s message of Tue, 10 Jan 84
16:41:04 PST
To: EMMA@SRI-AI.ARPA
cc: csli-folks@SRI-AI.ARPA
I would like a copy of the directory.
Thanks,
Kris
∂11-Jan-84 1220 STAN@SRI-AI.ARPA A reminder
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Jan 84 12:19:58 PST
Date: 11 Jan 1984 1216-PST
From: Stan at SRI-AI
Subject: A reminder
To: CSLI-folks:
The Foundations of Situated Language seminar for the winter quarter
will deal with practical reasoning as studied in AI and philosophy.
The goal of the seminar is to develop an understanding of the relation
between traditional issues and problems in philosophy that go by the
name of "practical reasoning" and computational approaches studied in
AI. To reach this goal we will read and closely analyze a small
number of classic papers on the subject.
The seminar will not be a colloquium series, but a working seminar in
which papers are distributed and read in advance. The first meeting
will be held on Jan. 12 in the Ventura Hall seminar room.
Tentative schedule:
Thurs. Jan. 12 Michael Bratman
"A partial overview of some philosophical work
on practical reasoning"
Thurs. Jan. 19 Kurt Konolige
Presentation of "Application of Theorem Proving to
Problem Solving," (C. Green), sections 1-5
Thurs. Jan. 26 John Perry
A philosopher grapples with the above
Later in the seminar we will discuss:
"STRIPS: A New Approach to the Application of Theorem Proving to
Problem Solving," (R. Fikes and N. Nilsson)
"The Frame Problem and Related Problems in Artificial Intelligence,"
(P. Hayes)
A philosophical paper on practical reasoning, to be selected.
-------
∂11-Jan-84 1421 GOLUB@SU-SCORE.ARPA Resident Fellow
Received: from SU-SCORE by SU-AI with TCP/SMTP; 11 Jan 84 14:21:20 PST
Date: Wed 11 Jan 84 14:19:35-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: Resident Fellow
To: faculty@SU-SCORE.ARPA
The Resident Fellow positions are now being advertised. If you
are interested, Elyse has the detailed information.
GENE
-------
∂11-Jan-84 1507 EMMA@SRI-AI.ARPA Course:Philosophy 266
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Jan 84 15:07:03 PST
Date: Wed 11 Jan 84 15:04:43-PST
From: Emma Pease <EMMA@SRI-AI.ARPA>
Subject: Course:Philosophy 266
To: csli-friends@SRI-AI.ARPA
Philosophy 266 TOPICS IN PHILOSOPHICAL LOGIC
By: Johan van Benthem
Time: Tuesdays 13:15-15:05
Place: Bldg. 90 rm. 92Q (philosophy bldg.) (see bottom)
This course consists of an introduction to `classical' intensional
logic, followed by a presentation of some current trends in this area
of research. In the introductory part, examples will be drawn from
the logic of tense, modality and conditionals. Current trends to be
presented are the transition from `total' to `partial' models, the
various uses of the generalized quantifier perspective, and newer
`dynamic' accounts of semantic interpretation.
Intensional logic has various aspects: philosophical, mathematical,
and linguistic. In particular, this course provides some broader
logical background for those interested in the semantics of natural
language.
*Starting Jan. 17, this will be changed to:
Tuesdays, 11:30-13:15
Seminar room, CSLI, Ventura Hall.
-------
∂11-Jan-84 2024 DKANERVA@SRI-AI.ARPA CSLI Activities Schedule for Thursday, January 12, 1984
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Jan 84 20:24:12 PST
Date: Wed 11 Jan 84 20:23:46-PST
From: DKANERVA@SRI-AI.ARPA
Subject: CSLI Activities Schedule for Thursday, January 12, 1984
To: csli-friends@SRI-AI.ARPA
This Thursday's CSLI Newsletter is so packed with course
announcements and other news that I haven't been able to finish
putting it together in time for you to have it for tomorrow's
activities. Here is this Thursday's schedule--the actual
Newsletter will follow tomorrow.
** SCHEDULE FOR THIS THURSDAY, JANUARY 12, 1984 **
10:00 a.m. Seminar on Foundations of Situated Language
Ventura Hall "An Overview of Practical Reasoning"
Conference Room by Michael Bratman
12 noon TINLunch
Ventura Hall "Linguistic Modality Effects on Fundamental
Conference Room Frequency in Speech"
by Douglas O'Shaughnessy and Jonathan Allen
Discussion led by Marcia Bush
2:15 p.m. Seminar on Situation Semantics
Redwood Hall by Jon Barwise
Rm G-19
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall "From Pixels to Predicates: Vision Research
Rm G-19 in the Tradition of David Marr"
by Sandy Pentland, SRI
* * * * *
-------
∂12-Jan-84 0229 RESTIVO@SU-SCORE.ARPA PROLOG Digest V2 #2
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Jan 84 02:29:00 PST
Date: Wednesday, January 11, 1984 11:14PM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V2 #2
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Thursday, 12 Jan 1984 Volume 2 : Issue 2
Today's Topics:
Puzzle - Marcel's Lamp & Jobs
----------------------------------------------------------------------
Date: Tue 10 Jan 84 00:03:22-MST
From: Uday Reddy <U Reddy@Utah-20>
Subject: Marcel's Dilemma
Ref: Marcel Schoppers, Prolog Digest, 2, 1, Jan 7, 84
The problem cited by Marcel is an example of what happens if one
tries to translate English into a formal language (here Prolog)
without regard to the semantic objects involved. His problem is
"Suppose you are shown two lamps, 'a' and 'b', and you are told
that, at any time,
1. at least one of 'a' or 'b' is on.
2. whenever 'a' is on, 'b' is off.
3. each lamp is either on or off."
Marcel tried to express these constraints in Prolog, by defining
a predicate 'on'. Such an exercise is meaningless. However you
specify 'on', what answers can you expect for the goal ?- on(X).
Either 'a', or 'b' or both of them. None of these results captures
the constraints above.
What is 'on' anyway? It is a predicate on {a,b}. Do the above
constraints specify what 'on' is ? No. They specify what 'on'
can be. In other words, they specify the properties the predicate
'on' should satisfy. So, what we need is a predicate, say 'valid',
which specifies whether a particular 'on' predicate is valid or
not. It is, clearly, a higher-order predicate. Forgetting about
operational semantics of higher-order logic, it may be specified
as
valid(on) :- (on(a) ; on(b)), (on(a) -> not(on(b))).
Reworking this into Prolog would correspond to what Marcel calls
"dumb-search-and-filter" solution. I wonder what he means by
"dumb". If what you are trying to do is express a set of axioms
and use Prolog to obtain all possible solutions of the axioms,
that is what you get. If you are not satisfied with
search-and-filter, you have to do more than merely express a set of axioms.
You have to synthesize an algorithm. Don't expect black magic
from Prolog.
-- Uday Reddy
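[Spelling out the point above: folding the constraints into a
first-order check over candidate states gives exactly the
search-and-filter program Marcel wanted to avoid. A minimal sketch,
with made-up predicate names lamp_state/1 and state/2:]
% Enumerate candidate states and filter them with the constraints.
lamp_state(on).
lamp_state(off).
state(A, B) :-
    lamp_state(A),                  % 3. each lamp is either on or off
    lamp_state(B),
    ( A = on ; B = on ),            % 1. at least one lamp is on
    ( A = on -> B = off ; true ).   % 2. whenever a is on, b is off
% ?- state(A, B).
% A = on,  B = off ;
% A = off, B = on.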
------------------------------
Date: 7-Jan-84 17:50:53-CST (Sat)
From: Gabriel@ANL-MCS (John Gabriel)
Subject: The Lamps Puzzle.
Here is a solution to the "lamps" problem posed in Vol 2, #1.
I am not sure if the setter will agree that it complies with
the condition "no generate and search", and therein lies its
real interest.
First a few notes about knowledge representation. I have
chosen to use a formalism where signal names "a" or "b" are
bound to values "on" or "off", using a predicate
signal(NAME,VALUE).
This trick allows me to do the equivalent of asking about
VALUE(NAME), without becoming entangled in apparent second
order calculus. Database people or users of the McCabe Prolog
will recognise this as simply working with triplets [signal,
VALUE,NAME] instead of doublets [VALUE,NAME]. The job could
equally well be done using lists or sets.
Second, about generate and search. There seem to me to be two
issues: first, the presence of backtracking blurs the distinction
between "conjunction of goals" and "generate and search" so much
that one might argue the only non "generate and search" solution
was one that never backtracks. The two predicates
state([a,on],[b,off]).
state([a,off],[b,on]).
certainly meet this criterion but demand fairly extensive knowledge
transformation by the programmer to reach them from the problem as
stated. If we do not allow this, then any reasonable solution seems
to me to require a conjunction of goals essentially the same as the
problem statement, and inevitably causes a generate and search by
backtracking. So the question becomes "Can we arrange the goals
so as to minimise or eliminate backtracking ?" Experience here using
'Jobs' puzzles as tests for resolution based theorem provers
suggests the answer is at best "We cannot always eliminate all
backtracking without more powerful resolution methods than are used
by Prolog, and perhaps not even then." But judicious reformulation
of the problem in Prolog by reordering goals alone can gain a
factor of 30 in execution speed for a program in CProlog. In fact
an interesting offshoot of that observation is a project to
"compile" jobs problems in natural language to optimal Prolog.
I have not embarked on this, but I do have some similar work in
progress compiling build specifications for logic circuits to
rules determining the I/O behaviour of the system.
Here is a solution with a "conjunction of goals and backtracking":-
/* signal binds a name to a value, E.g. signal(a,on) says the
signal a has value on. The following two predicates ensure
that signals take values off and on */
signal(_,on).
signal(_,off).
condition(A,B):- /* applies the conditions of the problem */
signal(_,A),
signal(_,B), /* each signal must be off or on */
(([A,B] = [off,on]) ; ([A,B] = [on,off])), /* if A is off,
B is on etc. */
((A = on) ; (B = on)). /* one of A or B is on */
state([a,A],[b,B]):- /* valid system states */
condition(A,B).
Here is another more elegant and concise solution:-
valid(on).
valid(off).
state([a,A],[b,B]):-
valid(A),
valid(B),
or((A = on), (B = on)),
ifthen((A = on), (B = off)).
or(X,Y):- X,!.
or(X,Y):- Y.
ifthen(X,Y):- /* not(X) or Y */
not(call(X)),
!.
ifthen(X,Y):-
call(Y).
Perhaps the Jobs Puzzle and two of the solutions may be of
interest. I am indebted to Linda Mazur for the puzzle which
is part of a collection of similar material from the literature
on logic.
- Four ladies meet regularly to play cards, each has one and
only one job. Their names are:- Alice, Betty, Carol and Dorothy;
the jobs are pilot, lifeguard, housewife, and professor.
- At one meeting the colors of their dresses were pink, yellow,
blue and white.
- The pilot and Carol played bridge with the ladies in pink and
blue dresses.
- Betty always beats the lifeguard when canasta is played.
- Alice and the professor both envy the lady in blue, who is
not the housewife, since the housewife always wears white.
- Who has which job, and what dress was each lady wearing on
the day of the bridge game mentioned above.
facts:- /* facts for Jobs Puzzle */
/* jobs and dress colors for alice,betty,carol,dorothy */
person(alice,AJOB,ADRESS),
person(betty,BJOB,BDRESS),
not(AJOB=BJOB),not(ADRESS=BDRESS),
person(carol,CJOB,CDRESS),
not(AJOB=CJOB),not(ADRESS=CDRESS),not(BJOB=CJOB),not(BDRESS=CDRESS),
person(dorothy,DJOB,DDRESS),
not(AJOB=DJOB),not(ADRESS=DDRESS),not(BJOB=DJOB),not(BDRESS=DDRESS),
not(CJOB=DJOB),not(CDRESS=DDRESS),
/* set out the bindings */
BINDING=[[alice,AJOB,ADRESS],
[betty,BJOB,BDRESS],
[carol,CJOB,CDRESS],
[dorothy,DJOB,DDRESS]],
not(BJOB=lifeguard), /* canasta */
not(CJOB=pilot), /* bridge - carol isn't pilot */
not(CDRESS=pink), /* bridge-carol plays w/ lady in pink */
not(CDRESS=blue), /* bridge */
not(AJOB=professor),
not(ADRESS=blue), /* both from data on envy*/
not(member(BINDING,[←,pilot,pink])),
not(member(BINDING,[←,pilot,blue])),
not(member(BINDING,[←,professor,blue])),
not(member(BINDING,[←,housewife,blue])),
member(BINDING,[←,housewife,white]),
nl,write(alice),cma,write(AJOB),cma,write(ADRESS),
nl,write(betty),cma,write(BJOB),cma,write(BDRESS),
nl,write(carol),cma,write(CJOB),cma,write(CDRESS),
nl,write(dorothy),cma,write(DJOB),cma,write(DDRESS).
cma:- write(',').
jobs([housewife,pilot,professor,lifeguard]). /*valid jobs */
names([alice,betty,carol,dorothy]). /*valid names */
colors([pink,blue,yellow,white]).
person(NAME,JOB,COLOR):- /*attributes of a person */
names(NAMES),
jobs(JOBS),
colors(COLORS),
member(NAMES,NAME),
member(JOBS,JOB),
member(COLORS,COLOR).
/* service routines follow to test if all members of a list are
distinct */
/* distinct fails unless all members of a list are distinct,
non-variable */
distinct([]). /* if we processed the whole list w/o failure we
succeed */
distinct([H|T]):-
nonvar(H), /* Is H instantiated? */
not(member(T,H)), /* Is H present in the rest of the list */
/* if we got to here H is distinct from any element in T
the remainder of the list, now repeat for rest of list */
distinct(T).
member([],ITEM):- fail. /* if we got to empty list, then fail */
/* ITEM is in a list either if it is the first item in the list
I.e. H of [H|T] or in T, the rest of the list */
member([H|T],ITEM):-
((H = ITEM) ; member(T,ITEM)).
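To run this solution one asks
?- facts.
which, provided the clues have been encoded faithfully, should print
one line per lady; the expected (and unique) assignment is
alice,pilot,yellow
betty,professor,pink
carol,housewife,white
dorothy,lifeguard,blue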
Here is a faster solution:-
facts:-
possible(alice,AJOB,ADRESS),
possible(betty,BJOB,BDRESS),
APROP=[AJOB,ADRESS],
BPROP=[BJOB,BDRESS],
nident(APROP,BPROP),
possible(carol,CJOB,CDRESS),
CPROP=[CJOB,CDRESS],
nident(APROP,CPROP),
nident(BPROP,CPROP),
possible(dorothy,DJOB,DDRESS),
DPROP=[DJOB,DDRESS],
nident(APROP,DPROP),
nident(BPROP,DPROP),
nident(CPROP,DPROP),
nl,write(alice),cma,write(APROP),
nl,write(betty),cma,write(BPROP),
nl,write(carol),cma,write(CPROP),
nl,write(dorothy),cma,write(DPROP).
cma:- write(',').
possible(NAME,JOB,DRESS):-
member([alice,betty,carol,dorothy],NAME),
member([yellow,pink,blue,white],DRESS),
member([pilot,lifeguard,professor,housewife],JOB),
ifthen((JOB=housewife),(DRESS=white)),
ifthen((DRESS=white),(JOB=housewife)),
nand(NAME=betty,JOB=lifeguard),
nand(NAME=carol,JOB=pilot),
nand(NAME=carol,DRESS=pink),
nand(NAME=alice,JOB=professor),
nand(NAME=carol,DRESS=blue),
nand(NAME=alice,DRESS=blue),
nand(JOB=pilot,DRESS=blue),
nand(JOB=professor,DRESS=blue),
nand(JOB=housewife,DRESS=blue).
nand(X,Y):- /* fails when both X and Y succeed, otherwise succeeds */
call(X),
call(Y),
!,fail.
nand(←,←).
ifthen(X,Y):- /* (not X) or Y */
not(call(X)).
ifthen(X,Y):-
call(Y).
nident([],[]). /* succeeds when corresponding elements of the two lists all differ */
nident([H1|T1],[H2|T2]):-
not(H1=H2),
nident(T1,T2).
member([],ITEM):- fail. /* if we got to empty list, then fail */
/* ITEM is in a list either if it is the first item in the list
i.e. H of [H|T] or in T, the rest of the list */
member([H|T],ITEM):-
((H = ITEM) ; member(T,ITEM)).
Those readers who worked on the Tigers puzzle of the last issue
of Vol.1 may be interested to try these techniques for that case.
Also E. Sacerdoti in his book on "Planner" type problems makes
some interesting points about a plan being a solution to a
constrained conjunction of goals, and the continuum from generate
and test to deterministic solution. If one can generate a nearly
deterministic solution by "compilation", it seems to me that this matches
some forms of human reasoning quite well too.
-- John Gabriel
------------------------------
End of PROLOG Digest
********************
∂12-Jan-84 0913 KJB@SRI-AI.ARPA Letter to Charlie Smith
Received: from SRI-AI by SU-AI with TCP/SMTP; 12 Jan 84 09:13:31 PST
Date: Thu 12 Jan 84 09:08:03-PST
From: Jon Barwise <KJB@SRI-AI.ARPA>
Subject: Letter to Charlie Smith
To: csli-principals@SRI-AI.ARPA
cc: dkanerva@SRI-AI.ARPA
The end of the year letter is on <dkanerva>report.dk . If you
have a chance, read it and send comments to Betsy and/or Dianne.
We plan to mail it out tomorrow morning. Thanks, Jon
-------
∂12-Jan-84 1047 WUNDERMAN@SRI-AI.ARPA Visit by Mitch Waldrop
Received: from SRI-AI by SU-AI with TCP/SMTP; 12 Jan 84 10:46:57 PST
Date: Thu 12 Jan 84 10:45:11-PST
From: WUNDERMAN@SRI-AI.ARPA
Subject: Visit by Mitch Waldrop
To: CSLI-Folks@SRI-AI.ARPA
This Thursday and Friday, 1/12-13, Dr. Mitch Waldrop from Science
Magazine, Washington DC, will be here to observe our activities,
meet with various researchers, and gather material for an in-depth
article he is writing on AI, natural language, vision and robotics.
Feel free to introduce yourself to him, or contact me to arrange a
time when you can meet with him individually. Thanks.
--Pat W.
-------
∂12-Jan-84 1244 MOLENDER@SRI-AI.ARPA Talk on algebraic data types, 1/18, 4:15pm, EK242
Received: from SRI-AI by SU-AI with TCP/SMTP; 12 Jan 84 12:43:11 PST
Date: Thu 12 Jan 84 12:39:45-PST
From: Margaret Olender <MOLENDER@SRI-AI.ARPA>
Subject: Talk on algebraic data types, 1/18, 4:15pm, EK242
To: CSLI-Friends@SRI-AI.ARPA,
AIC-Associates: ;
SPEAKER: Dr. Peter Pepper, Institut fuer Informatik, Technischen
Universitaet, Munich
SUBJECT: ``Implementations of Abstract Data Types and Their
Correctness ''
PLACE: AIC Conference Room, EK242
DATE: Wednesday, January 18, 1984
TIME: 4:15pm
ABSTRACT
Abstract data types have become a major tool for the specification of
software products. However, there remains the issue of deriving
implementations from given specifications. Fortunately, a great part
of this process can be done within the framework of algebraic
specification techniques.
We discuss a few operations on abstract data types that can be used to
derive implementations. Moreover, the notion of ``equivalence'' of
types based on observable behaviour is investigated, which leads to a
notion of correctness of type implementations.
-------
∂12-Jan-84 1600 JF@SU-SCORE.ARPA number theory seminar
Received: from SU-SCORE by SU-AI with TCP/SMTP; 12 Jan 84 16:00:01 PST
Date: Thu 12 Jan 84 15:58:19-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: number theory seminar
To: aflb.local@SU-SCORE.ARPA
the two times that have been proposed for a computational number theory
seminar are
tuesdays, around 1 p.m.
thursdays, after aflb, around 2 p.m.
please let me know what you think of these times. if you cannot make it at
either of those times but want to attend the seminar, suggest an alternate
time. i cannot start next tuesday (i have to make an urgent visit to Squaw
Valley), but i could start next thursday. as soon as we have a consensus on
the time, i will book a room and arrange the first talk.
thanks,
joan
-------
∂12-Jan-84 1623 LEISER@SRI-AI.ARPA TERMINAL SHUT-OFF
Received: from SRI-AI by SU-AI with TCP/SMTP; 12 Jan 84 16:23:40 PST
Date: Thu 12 Jan 84 16:22:01-PST
From: Michele <LEISER@SRI-AI.ARPA>
Subject: TERMINAL SHUT-OFF
To: csli-folks@SRI-AI.ARPA
******************************************************************************
It has come to my attention that some of you are turning off your terminals
each night in a power-conservation effort.
While I commend your environmentally-raised consciousness, may I ask that you
simply turn the contrast knob as low as possible and leave the power on each
weeknight. Fridays (or when you leave for two or three days) you may turn them
off.
Thank you!
******************************************************************************
-------
∂13-Jan-84 0828 CLT SEMINAR at SRI
To: "@DIS.DIS[1,CLT]"@SU-AI
SPEAKER: Dr. Peter Pepper, Institut fuer Informatik, Technischen
Universitaet, Munich
SUBJECT: ``Implementations of Abstract Data Types and Their
Correctness ''
PLACE: AIC Conference Room, EK242
DATE: Wednesday, January 18, 1984
TIME: 4:15pm
ABSTRACT
Abstract data types have become a major tool for the specification of
software products. However, there remains the issue of deriving
implementations from given specifications. Fortunately, a great part
of this process can be done within the framework of algebraic
specification techniques.
We discuss a few operations on abstract data types that can be used to
derive implementations. Moreover, the notion of ``equivalence'' of
types based on observable behaviour is investigated, which leads to a
notion of correctness of type implementations.
-------
Those outside SRI can assemble in the lobby of the engineering building
(on ravenswood opposite the pine street intersection) to be escorted in.
-- From: Margaret Olender <MOLENDER@SRI-AI.ARPA>
-- Via: Richard Waldinger <WALDINGER@SRI-AI.ARPA>
∂13-Jan-84 1018 @SU-SCORE.ARPA:CMILLER@SUMEX-AIM.ARPA HPP ID MAIL SERVICE
Received: from SU-SCORE by SU-AI with TCP/SMTP; 13 Jan 84 10:17:54 PST
Received: from SUMEX-AIM.ARPA by SU-SCORE.ARPA with TCP; Fri 13 Jan 84 10:12:06-PST
Date: Fri 13 Jan 84 09:52:54-PST
From: Carole Miller <CMILLER@SUMEX-AIM.ARPA>
Subject: HPP ID MAIL SERVICE
To: HPP@SUMEX-AIM.ARPA, SUMEX-STAFF@SUMEX-AIM.ARPA, ADMIN@SU-SCORE.ARPA,
FACULTY@SU-SCORE.ARPA, BERG@SU-SCORE.ARPA, BERGMAN@SU-SCORE.ARPA,
OSTROV@SU-SCORE.ARPA
As of this morning, we finally have ID mail service to Welch Road.
Our ID mail address is: HPP/701 Welch Rd - Bldg C.
Carole
-------
∂13-Jan-84 1425 @SU-SCORE.ARPA:reid@Glacier improvements to Terman Auditorium
Received: from SU-SCORE by SU-AI with TCP/SMTP; 13 Jan 84 14:25:41 PST
Received: from Glacier by SU-SCORE.ARPA with TCP; Fri 13 Jan 84 14:23:26-PST
Date: Friday, 13 January 1984 14:21:50-PST
From: Brian Reid <reid@Glacier>
Subject: improvements to Terman Auditorium
To: csl-faculty@Sierra, faculty@Score
I am teaching in Terman Auditorium for the first time this quarter, and
I am amazed at what an unfriendly room it is for teaching a class.
I intend to make some specific suggestions to the TV Network people for
how the room can be made more suitable for lecturing.
These suggestions will of course involve money. I think that there is a
much better chance of the TV Network being able to get that money if
there is evidence of faculty support for the improvement of that room.
I would like to know how many other people share my dislike of Terman
Aud as a place to teach, and how many of you would be willing to go on
record as being in favor of upgrading it.
Brian
∂13-Jan-84 1607 @SRI-AI.ARPA:brian%Psych.#Pup@SU-SCORE.ARPA Perception, Language and Cognition
Received: from SRI-AI by SU-AI with TCP/SMTP; 13 Jan 84 16:07:08 PST
Received: from SU-SCORE.ARPA by SRI-AI.ARPA with TCP; Fri 13 Jan 84 16:01:12-PST
Received: from Psych by Score with Pup; Fri 13 Jan 84 15:59:15-PST
Date: 13 Jan 1984 16:00:06-PST
From: brian at SU-Tahoma
To: csli-friends@sri-ai at score, dkanerva@sri-ai at score
Subject: Perception, Language and Cognition
Issues in Perception, Language and Cognition (Psych 279)
A reminder about our first talk of the quarter:
WHEN: Monday January 16, noon to 1:15
WHERE: Jordan Hall (Psychology) room 100
WHO: Professor George Sperling
NYU Psychology Dept. and the Bell Laboratories, Murray Hill.
WHAT: The Logic of Perception
ABSTRACT
------------
The logic of perception involves using unreliable, ambiguous information to
arrive at a categorical decision. The talk will emphasize concepts and
examples; an illustrative movie will be shown. The prototypical phenomenon
is multiple stable states in response to the same external stimulus,
together with path dependence, usually in the form of hysteresis. The
mathematical description is in terms of potential theory (energy wells,
etc) or catastrophe theory. Neural models with local inhibitory
interaction are proposed to account for these phenomena; these models are
the antecedents of contemporary relaxation methods used in computer vision.
New (and old) examples are provided from binocular vision and depth
perception, including a practical demonstration of how the perceptual
decision of 3D structure in a 2D display can be controlled by an
(irrelevant) brightness cue.
-------
Our speaker next time will be Dr. Dave Nagel, Associate Director of Life
Sciences, NASA-Ames. His talk will be entitled "Decisions and Automation."
∂13-Jan-84 1627 DKANERVA@SRI-AI.ARPA Newsletter No. 14, January 12, 1984
Received: from SRI-AI by SU-AI with TCP/SMTP; 13 Jan 84 16:24:56 PST
Date: Thu 12 Jan 84 19:27:39-PST
From: DKANERVA@SRI-AI.ARPA
Subject: Newsletter No. 14, January 12, 1984
To: csli-folks@SRI-AI.ARPA
CSLI Newsletter
January 12, 1984 * * * Number 14
REORGANIZATION OF NATURAL LANGUAGE AREA IN CSLI
After consultation with the Advisory Panel and the Executive
Committee, I have decided that there should only be one natural
language area, not two. I have asked John Perry and Betsy Macken to
work with people in the old Areas A and B to make recommendations as
to a combined NL Area. If you have any ideas about this, please speak
to them soon.
- Jon Barwise
* * * * * * *
VISITORS
This week, ROHIT PARIKH, a logician from CCNY/Brooklyn, will be
in the area to give some talks at IBM. Unfortunately, we didn't find
out about his trip soon enough to schedule a CSLI-sponsored talk.
Since he would like to meet some of the CSLI people, as I'm sure many
of us would like to meet him, he'll be visiting CSLI Thursday
afternoon.
- John Etchemendy
DAVID ISRAEL, from BBN, will be visiting the week of January 16.
Anyone who wants a chance to meet with him should contact Sandy Riggs
(preferably by netmail, RIGGS@SRI-AI), who will arrange a schedule
later. No formal presentations are planned.
FRED DRETSKE, from the University of Wisconsin at Madison, will
be at Stanford January 19 and 20. He will be the CSLI Colloquium
speaker at 4:15 on Thursday, January 19, on the topic "Aspects of
Cognitive Representation." Dretske will be speaking also on Friday,
January 20, at the Philosophy Department Colloquium (3:15 p.m., Bldg.
90, Rm. 92Q). The title of that talk will be "Misrepresentation: How
to Get Things Wrong."
DAVID MCCARTY, of the Philosophy Department of Ohio State
University, will be at Stanford the week of January 23, giving talks
Tuesday through Thursday of that week. Abstracts of his talks and
details of time and place will be provided later. These talks will be
of interest especially to people in the area of computer languages.
* * * * * * *
CSLI MESSAGE FILE ON SU-AI SYSTEM AT STANFORD
Russ Greiner has set up a file at SAIL to receive all CSLI mail
directed to CSLI-FRIENDS. If you'd rather not fill your own mail file
at SAIL with CSLI news, just have your name removed from CSLI-FRIENDS
and read the file CSLI.TXT[2,2] at SAIL. If you set up a similar file
on another system, just send the name of the file to
CSLI-REQUESTS@SRI-AI, and all CSLI-FRIENDS mail will be sent to that
file.
	                     * * * * * * *
SCHEDULE FOR *THIS* THURSDAY, JANUARY 12, 1984
10:00 a.m. Seminar on Foundations of Situated Language
Ventura Hall "An Overview of Practical Reasoning"
Conference Room by Michael Bratman
12 noon TINLunch
Ventura Hall "Linguistic Modality Effects on Fundamental
Conference Room Frequency in Speech"
by Douglas O'Shaughnessy and Jonathan Allen
Discussion led by Marcia Bush
2:15 p.m. Seminar on Situation Semantics
Redwood Hall by Jon Barwise
Rm G-19
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall "From Pixels to Predicates: Vision Research
Rm G-19 in the Tradition of David Marr"
by Sandy Pentland, SRI
* * * * * * *
SCHEDULE FOR *NEXT* THURSDAY, JANUARY 19, 1984
10:00 a.m. Seminar on Foundations of Situated Language
Redwood Hall Presentation of "Application or Theorem Proving
Rm G-19 to Problem Solving" (C. Green), secs. 1-5
by Kurt Konolige
12 noon TINLunch
Ventura Hall "A Short Companion to the Naive Physics
Conference Room Manifesto"
by David Israel, BBN Labs
(author present)
2:15 p.m. Seminar on Situation Semantics
Redwood Hall by Jon Barwise
Rm G-19
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall "Aspects of Cognitive Representation"
Rm G-19 by Fred Dretske, U. Wisconsin, Madison
	                     * * * * * * *
SEMINAR ON THE FOUNDATIONS OF SITUATED LANGUAGE
The Seminar on the Foundations of Situated Language to be held
during the winter quarter will deal with practical reasoning as
studied in artificial intelligence and in philosophy. The goal of the
seminar is to develop an understanding of the relation between the
traditional issues and problems in philosophy that go by the name of
"practical reasoning" and the computational approaches studied in AI.
To reach this goal, we will read and closely analyze a small number of
classic papers on the subject.
The seminar will not be a colloquium series but a working seminar
in which papers are distributed and read in advance. The first
meeting will be held on Thursday, January 12, in the Ventura Hall
conference room.
SCHEDULE:
Thursday, Jan. 12 Michael Bratman
"A Partial Overview of Some Philosophical Work
on Practical Reasoning"
Thursday, Jan. 19 Kurt Konolige
Presentation of "Application of Theorem Proving
to Problem Solving" (C. Green), sections 1-5
Thursday, Jan. 26 John Perry
A philosopher grapples with the above.
Later in the seminar, we will discuss:
"STRIPS: A New Approach to the Application of Theorem Proving to
Problem Solving," (R. Fikes and N. Nilsson)
"The Frame Problem and Related Problems in Artificial Intelligence,"
(P. Hayes)
A philosophical paper on practical reasoning, to be selected.
* * * * * * *
TINLUNCH SCHEDULE
TINLunch will be held on each Thursday at Ventura Hall on the
Stanford University campus as a part of CSLI activities. Copies of
TINLunch papers are at SRI in EJ251 and at Stanford in Ventura Hall.
NEXT WEEK: "A Short Companion to the Naive Physics Manifesto"
by David Israel, BBN
January 12 Marcia Bush
January 19 David Israel (guest of Fernando Pereira)
January 26 Stanley Peters
	                     * * * * * * *
CSLI SEMINAR ON SITUATION SEMANTICS
The afternoon CSLI seminar for the coming quarter is on situation
semantics. I will teach the first five weeks. Then we will have five
guest lecturers discussing applications to natural language. The book
"Situations and Attitudes" by Perry and me will be the text, but the
material will go beyond what is presented there. The seminar is a
course in the Philosophy Department, so students can get credit for
it. The seminar will meet Thursday afternoons at 2:15. The initial
meeting will be in Redwood Hall, but hopefully the seminar will be
small enough to move to the conference room in Ventura after a week or
two.
- Jon Barwise
* * * * * * *
CSLI COLLOQUIUM
Sandy Pentland of SRI will speak at the CSLI Colloquium this
Thursday, January 12, on vision research in AI and psychology. His
title is "From Pixels to Predicates: Vision Research in the Tradition
of David Marr." The Colloquium will be held as usual at 4:15 in the
Ventura Hall conference room.
NEXT WEEK: "Aspects of Cognitive Representation"
by Fred Dretske, U. Wisconsin, Madison
* * * * * * *
PROJECT C1 SEMINAR: SEMANTICS OF COMPUTER LANGUAGES
This is the tentative outline for the C1 Seminar, winter quarter
1984. The choice of topics reflects an intent to cover the
mathematical details of some basic topics in denotational semantics,
as well as some less basic topics that seem especially related to
situation semantics and other concerns of CSLI. Plotkin's notes will
be used for topics 2, 3, 4, and 5. The seminar will meet Tuesdays
from 10 a.m. to 12 noon, starting January 17th, in the Ventura Hall
seminar room. Volunteers for lectures, suggestions for other topics,
etc., should be sent to GOGUEN@SRI-AI and MESEGUER@SRI-AI.
1. Compositionality, abstract syntax, semantic algebras, and initial
algebra semantics
2. CPO's and basic constructions (CPO is Cartesian closed)
3,4. Solving domain equations (with examples)
5. Algebraic cpo's and computability
6. Abstract semantic algebras (Mosses)
7. Continuous algebras
Some topics may be eliminated if others require more time; in
addition, there will be at least one guest lecture, by David McCarty.
	                     * * * * * * *
FOUR LECTURES ON THE FORMALIZATION OF COMMONSENSE KNOWLEDGE
John McCarthy will give four lectures on the formalization of
commonsense knowledge. The lectures will be on Fridays at 3 p.m. The
first will be on Friday January 20 and will be held in the conference
room of the Center for the Study of Language and Information (CSLI)
conference room in Ventura Hall at Stanford.
1. The "situation calculus." Expression of the facts about the
effects of actions and other events in terms of a function result(e,s)
giving the new situation that arises when the event e occurs in the
situation s. The frame and qualification problems. Advantages and
disadvantages of various reifications.
2. The circumscription mode of nonmonotonic reasoning.
Mathematical properties and problems of circumscription. Applications
of circumscription to formalizing commonsense facts. Application to
the frame problem, the qualification problem and to the STRIPS
assumption.
3. Formalization of knowledge and belief. Modal and first-order
formalisms. Formalisms in which possible worlds are explicit objects.
Concepts and propositions as objects in theories.
4. Philosophical conclusions arising from AI work. Approximate
theories, second-order definitions of concepts, ascription of mental
qualities to machines.
The treatments given in the lectures are new, but the material is
related to the following papers.
McCarthy, John and P. J. Hayes (1969): "Some Philosophical Problems
from the Standpoint of Artificial Intelligence," in D. Michie (ed.),
"Machine Intelligence 4," American Elsevier, New York, NY.
McCarthy, John (1980): "Circumscription--A Form of Non-Monotonic
Reasoning," Artificial Intelligence, Volume 13, Numbers 1,2, April.
McCarthy, John (1977): "On The Model Theory of Knowledge" (with M.
Sato, S. Igarashi, and T. Hayashi), "Proceedings of the Fifth
International Joint Conference on Artificial Intelligence," M.I.T.,
Cambridge, Mass.
McCarthy, John (1979): "First Order Theories of Individual Concepts
and Propositions," in D. Michie (ed.), "Machine Intelligence 9,"
University of Edinburgh Press, Edinburgh.
McCarthy, John (1979): "Ascribing Mental Qualities to Machines," in
"Philosophical Perspectives in Artificial Intelligence," M. Ringle
(ed.), Harvester Press, July 1979.
	                     * * * * * * *
SEMINAR ON WHY DISCOURSE WON'T GO AWAY
This is to remind you of the continuation of our seminar. The
focus this term is on DISCOURSE. The first speaker will be Barbara
Grosz, who has been working on discourse phenomena for a long time.
She will give some perspectives on research on discourse and focus us
on some particular pieces of discourse that we shall be analyzing this
term. The following is a statement of will and purpose for the
seminar as a whole.
Last term we asked (rhetorically): Why won't CONTEXT go away?
This term, we ask (again rhetorically): Why won't DISCOURSE go away?
There are two naive motivations for presupposing that the
question has a real bite. The first is very general; it is a truism
but perhaps an important one, namely, that natural language comes in
chunks of sentences. These chunks are produced and understood quite
easily by people. They are meaningful as a unit. People seem to have
intuitions about their internal coherence. They seem to enjoy
relations (follow from, be paraphrases of, summaries of, relevant to,
etc.) with each other--chunks of discourse are related much as
constituents of sentences are related (though the basis for these
discourse relations may be different).
Furthermore, they (please note the perfectly understandable
INTER-discourse pronoun) are the backbone of multiperson communication
(i.e., dialogues, question-answering interactions, etc.). As such,
new types of adequacy conditions seem to grow on them and, again, we
all seem to abide (most of the time) by those conditions.
Finally, there is the general methodological feeling, again very
naive but sensible, that is analogous to the case of physical
theories: We have theories for subatomic particles, atoms, and
molecules (forget all the intermediate possibilities). Would it be
imaginable to focus just on subatomic particles or atoms? Surely not.
Actual history teaches us that molecular theories have been the focus
BEFORE subatomic theories. The fact that (formally oriented)
semantics has been done for a long time in a purely a priori way,
mimicking the model theory of logical languages, may explain the
opposite direction that we encounter in most language research. So,
if you try to be very naive and just LOOK at the natural phenomenon,
it's there, like Salt or Carbon Dioxide.
Now, all this sounds terribly naive. We usually couple it with
the second type of justification for an enterprise: There are actual
linguistic phenomena that seem to go beyond the sentential level. To
drop some names, many scholars seem to consider phenomena like
anaphora, temporal reference, intentional contexts (with a "t"),
definite descriptions, and presuppositions, as being inextricably
linked to the discourse level. The connection between our two
questions then becomes more clear: As a discourse unfolds, context
changes. We cannot account for the effects of context on
interpretation without considering its dynamic nature. The
interpretation of an utterance affects context as well as being
affected by it.
In this seminar, we want to try to get at some of the general
questions of discourse structure and meaning (what if we HAVE to
relate to the discourse, not just the sentential, level in our
analyses?) and the more specific questions having to do with anaphora,
tense, and reference. The program for January is:
Jan. 10 B. Grosz "Discourse Structure and Referring Expressions"
Jan. 17 J. Perry
Jan. 24 J. Perry
Jan. 31 K. Donnellan (visiting from UCLA)
Later speakers: R. Perrault, D. Appelt, S. Soames (Princeton), H. Kamp
(London) P. Suppes, and (possibly) S. Weinstein (U. Pennsylvania).
******* Time: as in last term, 3.15 pm, Ventura Hall, Tuesday. *******
ABSTRACT: "Discourse Structure and Referring Expressions"
by Barbara Grosz
The utterances of a discourse combine into units that are
typically larger than a single utterance, but smaller than the
complete discourse. The utterances that contribute to a particular
unit do not necessarily occur in a linear sequence. It is common both
for contiguous utterances to belong to different units and for
noncontiguous utterances to belong to the same unit. An individual
unit exhibits both internal coherence and coherence with other units.
That is, discourses have been shown to have two levels of coherence:
local coherence (tying the individual utterances in a unit) and global
coherence (relating the different units to one another). Certain uses
of definite descriptions and pronouns have been shown to interact
differently within these two levels. The presentation will examine
several different samples of discourse, review some work within AI
that treats various of these issues, and describe some important open
problems.
	                     * * * * * * *
STANFORD LINGUISTICS COLLOQUIUM
"The Semantics of Domain Adverbs" by Tom Ernst, Indiana University
Tuesday, Jan. 17, 3:15 p.m.
200-217 (History Corner), Stanford
Refreshments will be served after the talk at the Linguistics
Dept. Reading Room, Bldg. 100 on the Inner Quad. Upcoming colloquia:
Jan. 31 Francisca Sanchez, Stanford University
"A Sociolinguistic Study of Chicano Spanish"
Feb. 7 R. M. W. Dixon, Australian National University
Topic to be announced.
* * * * * * *
ISSUES IN PERCEPTION, LANGUAGE, AND COGNITION (PSYCH 279)
The purpose of this seminar is to work at developing a more
complete understanding of intelligent behavior. Toward this end, we
have invited a distinguished group of psychologists, philosophers, and
computer scientists, each with a different approach to the study of
mind, to lecture on their work. By juxtaposing and contrasting their
different views, we hope to learn something of how the various facets
of intelligence relate to one another.
WHEN: Mondays from noon to 1:15, starting January 16
WHERE: Jordan Hall (Psychology) room 100
START: Monday, January 16
First Meeting: "The Logic of Perception," Professor George Sperling,
NYU Psychology Dept. and Bell Labs
We will post weekly announcements of the speakers and titles; our
(tentative) schedule for the next eight weeks is as follows:
To be announced
Lynn Cooper, LRDC and Center for Advanced Study
Amos Tversky, Stanford Psychology
Roger Shepard, Stanford Psychology
Phil Cohen, Fairchild Laboratory for Artificial Intelligence Research
Hershel Liebowitz, Penn. State Psychology and Center for Advanced Study
R. F. Thompson, Stanford Psychology Dept.
Jon Barwise, Stanford Philosophy Dept. and CSLI
For further information contact
Brian Wandell or Sandy Pentland
Stanford Psychology SRI International
(497-3748) (859-6154)
	                     * * * * * * *
LINGUISTICS 130/230 - INTRODUCTION TO SYNTACTIC THEORY
WINTER, 1983-84
STAFF: Ivan A. Sag [principal instructor]
Jordan Hall 478 [tel. 497-3875]
Per-Kristian Halvorsen [lecturer]
Susannah Mackaye [assistant]
Jordan Hall 027 [tel. 497-0924]
LECTURES: Monday and Wednesday 1:15-3:05, in 50-51R
TUTORIALS: Optional tutorials to be held weekly
AIMS OF THE COURSE: To introduce the goals and some of the methods
of current work in generative syntax. The major
emphasis will be on the development of concepts
fundamental to such current approaches as Generalized
Phrase Structure Grammar and Lexical-Functional
Grammar. First-year graduate students should also be
enrolled in Linguistics 200 (Wasow), which presents
an historical perspective on transformational grammar
and "Government-Binding Theory".
SYLLABUS:
I. Constituent Structure: Phrase Structure rules and notations;
Subcategorization dependencies; Semantic Interpretation;
Syntactic features and feature conventions; X-Bar Theory.
II. Limitations of standard context-free phrase structure grammars.
A. Passives and Datives
B. Auxiliaries and Inversion
C. Unbounded Dependency Constructions
III. Presentation and comparison of various techniques developed
to overcome these limitations: transformations, lexical rules,
metarules, feature instantiation principles.
IV. Dependent Elements: "Dummy" pronouns, reflexives and reciprocals.
Their consequences for feature-based phrase structure approaches.
Their consequences for lexically-based functional approaches.
V. Infinitives and related structures: Control and Complementation
in GPSG and LFG.
	                     * * * * * * *
LINGUISTICS 233:
TOPICS IN SYNTACTIC THEORY: GENERALIZED PHRASE STRUCTURE GRAMMAR
INSTRUCTORS: Ivan A. Sag and Carl J. Pollard
MEETINGS: WEDNESDAY, 3:15-6:05, IN JORDAN HALL (PSYCH BLDG) RM. 048
ABSTRACT: In this seminar we will present (and suffer criticisms of)
two manuscripts now in final preparation stage: Gazdar,
Klein, Pullum and Sag's ENGLISH SYNTAX: A STUDY IN GENERALIZED
PHRASE STRUCTURE GRAMMAR and Pollard's GPSG'S, HEAD GRAMMARS
AND NATURAL LANGUAGE. Both studies focus on the relation between
syntax and semantics, develop explicit proposals concerning
syntactic features and grammatical categories, and treat a
wide array of empirical phenomena, primarily in English.
* * * * * * *
PHILOSOPHY 186: TOPICS IN MIND AND ACTION
MWF 9 Instructor: Helen Nissenbaum
90-92Q Office : C41
Office hrs: Friday 10-12
We will examine and compare the concepts of Emotion and Intellect.
Course readings include Philosophical and Psychological theories of
Emotion and Intellect spanning three centuries. We will look at a
number of the features traditionally thought to characterize Emotion
and Intellect and use these as a basis for comparing these two aspects
of mind. Examining Intellect we will take into account work in
Artificial Intelligence and the aspects of Intellect that it
presupposes.
Required Texts:
Ryle, The Concept of Mind
Haugeland (ed.), Mind Design
Readings. A collection of Xeroxed articles.
Recommended:
Rorty (ed.), Explaining Emotion
Boden, Artificial Intelligence and Natural Man
Jan 9-13: Introduction: Two categories of mind. What features
distinguish them?
Jan 16-20: Descartes on Reason; Descartes on Passion; Hume on Reason
Jan 23-27: Hume on Passion; Reason vs. Emotion in Explaining Action
Jan 30-Feb 3: Role of the Body; Theories of Emotion
Feb 6-10: What AI teaches us about the conception of Intellect
Feb 13-17: Examples of Computational Models of Aspects of Intellect;
Problems
Feb 20-24: An Alternate Vision of Mind
Feb 27- March 2: An Alternate Vision of Mind, continued
March 5-12: Rationality; Emotion and Cognition--partial conciliation?
March 13-15: Review
* * * * * * *
Philosophy 266 TOPICS IN PHILOSOPHICAL LOGIC
by Johan van Benthem
Time: Tuesdays 13:15-15:05
Place: Bldg. 90 rm. 92Q, Philosophy Bldg.*--SEE NOTE BELOW--*
This course consists of an introduction to `classical'
intensional logic, followed by a presentation of some current trends
in this area of research. In the introductory part, examples will be
drawn from the logic of tense, modality, and conditionals. Current
trends to be presented are the transition from `total' to `partial'
models, the various uses of the generalized quantifier perspective,
and newer `dynamic' accounts of semantic interpretation.
Intensional logic has various aspects: philosophical,
mathematical, and linguistic. In particular, this course provides
some broader logical background for those interested in the semantics
of natural language.
*Starting January 24, this will be changed to a more convenient time,
probably Wednesdays, 10:15-12:00, Seminar Room, CSLI, Ventura Hall.
	                     * * * * * * *
TALKWARE SEMINAR - CS 377
Terry Winograd
Date: MONDAY January 16 * Note change of day *
Speaker: Laura Gould and William Finzer (Xerox PARC LRG)
Topic: Programming by Rehearsal
Time: 2:15-4:00
Place: To be announced later this week
Abstract:
Programming by Rehearsal is the name given to a graphical
programming environment, devised for the use of teachers or curriculum
designers who want to construct interactive, instructional activities
for their students to use. The process itself relies heavily on
interactive graphics and allows designers to react immediately to
their emerging products by showing them, at all stages of development,
exactly what their potential users will see. The process is quick,
easy, and fun to engage in; a simple activity may be constructed in
less than half an hour.
In using the system, designers rely heavily on a set of
predefined `performers', each of which comes equipped with a set of
predefined actions; each action is associated with a specific `cue'.
A designer can `audition' a performer to see how it behaves by
selecting its various cues and watching its performance. The system
also allows the designer to construct new performers and to teach them
new cues. A large help system provides procedural as well as
descriptive assistance. Programming by Rehearsal is implemented in
Smalltalk-80 and runs on a Dorado. The current system contains 18
predefined performers from which several dozen productions have been
made, some by nonprogrammers. A video tape will be shown that
illustrates not only the productions but also the process by which
they were created.
* * * * * * *
TALK AT SRI BY PETER PEPPER
Dr. Peter Pepper, from the Institut fuer Informatik of the
Technischen Universitaet in Munich, will give a talk on Wednesday,
January 18, at 4:15 p.m. in the AIC Conference Room, EK242, at SRI
International. He will speak on ``Implementation of Algebraic Data
Types.'' Abstract data types have become a major tool for the
specification of software products. However, there remains the issue
of deriving implementations from given specifications. Fortunately, a
great part of this process can be done within the framework of
algebraic specification techniques. Pepper will discuss a few
operations on abstract data types that can be used to derive
implementations. Moreover, the notion of ``equivalence'' of types
based on observable behaviour is investigated, which leads to a notion
of correctness of type implementations.
	                     * * * * * * *
ALICE: A PARALLEL GRAPH-REDUCTION MACHINE
FOR DECLARATIVE AND OTHER LANGUAGES
SPEAKER: John Darlington, Department of Computing, Imperial College,
London
WHEN: Monday, January 23, 4:30pm
WHERE: AIC Conference Room, EK242, SRI International
ABSTRACT:
Alice is a highly parallel graph-reduction machine being designed
and built at Imperial College. Although designed for the efficient
execution of declarative languages, such as functional or logic
languages, ALICE is general purpose and can execute sequential
languages also.
This talk will describe the general model of computation,
extended graph reduction, that ALICE executes, outline how different
languages can be supported by this model, and describe the concrete
architecture being constructed. A 24-processor prototype is planned
for early 1985. This will give a two-orders-of-magnitude improvement
over a VAX 11/750 for declarative languages. ALICE is being
constructed out of two building blocks, a custom-designed switching
chip and the INMOS transputer. So far, compilers for a functional
language, several logic languages, and LISP have been constructed.
* * * * * * *
MEETING OF THE SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY
CALL FOR PAPERS
1984 ANNUAL MEETING
The Society for Philosophy and Psychology is calling for papers to
be read at its 10th annual meeting on May 16-20 at Massachusetts
Institute of Technology, Cambridge, Massachusetts.
The Society consists of psychologists and philosophers with common
interests in the study of behavior, cognition, language, the nervous
system, artificial intelligence, emotion, consciousness, and the
foundations of psychology.
The 1984 meeting of the Society will be run conjointly with the
MIT-Sloan conference on Philosophy and Psychology. Tyler Burge, Noam
Chomsky, Daniel Dennett, and Hartry Field will deliver papers at the
MIT-Sloan Conference with commentary by Ned Block, Robert Cummins,
Fred Dretske, Gilbert Harman, John Haugeland, Brian Loar, Barbara
Partee, Hilary Putnam, Zenon Pylyshyn, John Searle, Robert Stalnaker,
and Stephen Stich.
Contributed papers will be selected on the basis of quality and
possible interest to both philosophers and psychologists. Contributed
papers are for oral presentation and should not exceed a length of 30
minutes (about 12 double-spaced pages). The deadline for submission
is February 28, 1984.
Please submit three copies of your contributed paper to:
Professor Robert Cummins
Department of Philosophy
University of Illinois-Chicago
Chicago, IL 60680
Individuals interested in becoming members of the Society should
send $15.00 membership dues ($5.00 for students) to Professor Owen
Flanagan, Department of Philosophy, Wellesley College, Wellesley MA
02181.
* * * * * * *
PLEASE GET ANNOUNCEMENTS IN BY WEDNESDAY NOON OF EACH WEEK.
It benefits everyone most when activities are announced one week
in advance, so please get whatever information you can to me as early
as possible--even if there are still some details missing!
Send your news to CSLI-NEWSLETTER@SRI-AI or to DKANERVA@SRI-AI.
Or phone me at 497-1712 or send mail to me at Ventura Hall, Stanford,
CA 94305--whatever helps you get your announcement out to CSLI friends
as early as possible.
Thank you! - Dianne Kanerva
-------
∂14-Jan-84 1035 CLT SEMINAR at SRI
To: "@DIS.DIS[1,CLT]"@SU-AI
SUBJECT - ALICE
SPEAKER - John Darlington, Department of Computing, Imperial College,
London
WHEN - Monday, January 23, 4:30pm
WHERE - AIC Conference Room, EK242, SRI
ALICE
ALICE: A parallel graph-reduction machine for declarative and other
languages.
ABSTRACT
Alice is a highly parallel graph-reduction machine being designed and
built at Imperial College. Although designed for the efficient
execution of declarative languages, such as functional or logic
languages, ALICE is general purpose and can execute sequential
languages also.
This talk will describe the general model of computation, extended
graph reduction, that ALICE executes, outline how different languages
can be supported by this model, and describe the concrete architecture
being constructed. A 24-processor prototype is planned for early
1985. This will give a two-orders-of-magnitude improvement over a VAX
11/750 for declarative languages. ALICE is being constructed out of
two building blocks, a custom-designed switching chip and the INMOS
transputer. So far, compilers for a functional language, several logic
languages, and LISP have been constructed.
-------
Those outside SRI can assemble in the lobby of the engineering building
(on ravenswood opposite the pine street intersection) to be escorted in.
-- WALDINGER@SRI-AI
∂16-Jan-84 0217 RESTIVO@SU-SCORE.ARPA PROLOG Digest V2 #3
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Jan 84 02:16:45 PST
Date: Sunday, January 15, 1984 11:43AM
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>
Reply-to: PROLOG@SU-SCORE.ARPA
US-Mail: P.O. Box 4584 Stanford University, Stanford CA 94305
Phone: (415) 326-5550
Subject: PROLOG Digest V2 #3
To: PROLOG@SU-SCORE.ARPA
PROLOG Digest Monday, 16 Jan 1984 Volume 2 : Issue 3
Today's Topics:
Announcements - IEEE LP Symposium & Journal of Automated Reasoning
----------------------------------------------------------------------
From: Pereira@SRI-AI (Fernando Pereira)
Subject: IEEE Logic Programming Symposium
1984 International Symposium on
Logic Programming
Student Registration Rates
In our original symposium announcements, we failed to offer a
student registration rate. We would like to correct that situation
now. Officially enrolled students may attend the symposium for the
reduced rate of $75.00.
This rate includes the symposium itself (all three days) and one
copy of the symposium proceedings. It does not include the tutorial,
the banquet, or cocktail parties. It does, however, include the
Casino entertainment show.
Questions and requests for registration forms by US Mail to:
Doug DeGroot Fernando Pereira
Program Chairman SRI International
IBM Research or 333 Ravenswood Ave.
P.O. Box 218 Menlo Park, CA 94025
Yorktown Heights, NY 10598 (415) 859-5494
(914) 945-3497
or by net mail to:
ARPA Pereira@SRI-AI
UUCP !ucbvax!Pereira@SRI-AI
------------------------------
Date: 12-Jan-84 14:08:38-CST (Thu)
From: Wos@ANL-MCS (Larry Wos)
Subject: Journal of Automated Reasoning
The Journal of Automated Reasoning will begin publishing
in January of 1985. As editor-in-chief, I am officially
calling for papers on appropriate topics. The following
paragraphs describe the new journal. Please send papers
to me.
Larry Wos
MCSD
Argonne National Laboratory
9700 S. Cass Ave.
Argonne, Il. 60439
Phone: 312-972-7224
Home 312-493-0767
ARPA: Wos@ANL-MCS
Scope and Purpose:
This new journal will publish papers focusing on various aspects of
automated reasoning, maintaining a balance between theory and
application. The theoretical questions, among others, are concerned
with representation of knowledge, inference rules for drawing
conclusions from that knowledge, and strategies for controlling the
inference rules. The object of automated reasoning is the design and
implementation of a computer program that serves as an assistant in
solving problems and in answering questions that require reasoning.
Under the aegis of automated reasoning we include, for example, the
fields of automated theorem proving, logic programming, program
verification and synthesis, expert systems, computational logic, and
certain areas of artificial intelligence. As the list of fields
suggests, the journal will be interdisciplinary. The journal will
publish papers that are quite theoretical and also publish papers
that emphasize aspects of implementation.
The developments of the past five years illustrate the value that
can accrue to one field by considering problems from another field
with apparently unrelated interests. For example, the successful
consideration of open questions in mathematics and in formal logic
and the design and validation of logic circuits led directly to,
among others, automated reasoning techniques for generating models
and counter examples with an automated reasoning program. Evidence
is mounting of the power and usefulness of such reasoning programs.
In particular, a complex encryption algorithm currently in use has
been proved correct by a system for program verification. The
programming language Prolog and the expert system Mycin are examples
of useful systems relying on automated reasoning.
The objective of the journal is to provide a forum for those
interested purely in theory, those interested primarily in
implementation, and those interested in specific industrial and
commercial applications. Thus we shall be equally interested in
research papers and in papers discussing some application in which
automated reasoning plays a role. We shall promote an exchange of
information between groups not always thought to share a common
interest. For example, a paper might be published discussing the
prototype of some problem in industry--a problem that would appear
to be solvable with some technique from some area of automated
reasoning. A second paper might then be published with a solution
to that problem, giving the detailed methodology that was employed
and including certain implementation aspects.
Articles considered for publication must be of the highest quality
and focus on some aspect of automated reasoning. All articles will
be refereed. We shall encourage the submission of articles that
survey a subfield of automated reasoning, that present some open
question, and, especially, long articles that discuss theoretical
constructs, a program that relies on those constructs, and evidence
of the performance of that program. We can summarize by saying that
the journal will be broad in scope.
------------------------------
End of PROLOG Digest
********************
∂16-Jan-84 0913 JF@SU-SCORE.ARPA computational number theory seminar
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Jan 84 09:13:30 PST
Date: Mon 16 Jan 84 09:10:24-PST
From: Joan Feigenbaum <JF@SU-SCORE.ARPA>
Subject: computational number theory seminar
To: aflb.local@SU-SCORE.ARPA
cc: sharon@SU-SCORE.ARPA
i have reserved
MJH 301, Tuesday, January 24, 2:15-3:15 p.m.
for the first meeting of the prospective computational number theory seminar.
please send me mail if you definitely plan to come or if you definitely
cannot come because of a conflict. if there are more people in the second
category than the first, i will try to arrange an alternate time--when i
return to town in three days (thursday).
during the first meeting, we will try to get organized and i will try to
present some background material. if you miss it, you should be able to
come to the next one and follow with little trouble.
thanks,
joan
-------
∂16-Jan-84 1357 GOLUB@SU-SCORE.ARPA agenda for 1-17
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Jan 84 13:57:03 PST
Date: Mon 16 Jan 84 13:54:14-PST
From: Gene Golub <GOLUB@SU-SCORE.ARPA>
Subject: agenda for 1-17
To: CSD-Senior-Faculty: ;
We have a number of items to discuss.
AGENDA
1. Re-appointment of Schreiber
2. Promotion of Lenat
3. Possible joint appointment with psychology
The meeting will take place in MJH 252 at 2:30 pm. If you
have other items to discuss, please let me know.
GENE
-------
∂16-Jan-84 1606 YEARWOOD@SU-SCORE.ARPA Proposed space solution
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Jan 84 16:05:48 PST
Date: Mon 16 Jan 84 16:04:01-PST
From: Marlene Yearwood <YEARWOOD@SU-SCORE.ARPA>
Subject: Proposed space solution
To: CSD-Faculty: ;
cc: SPACE-COMMITTEE: ;
Stanford-Phone: (415) 497-2266
The space committee met on 11 Jan 1984 to consider several problems regarding
the use of space in MJH in order to prepare a recommendation to Prof. Golub
for their solution.
Among these issues were:
1. Additional space for the systems group.
2. Second floor space for publications.
3. Terminal space for MS project students.
The Committee felt that these problems could best be solved with the following
plan which continues to consolidate like functions and public areas. We are
circulating our plan for comments before we make our recommendation to Prof.
Golub.
The proposed plan:
1. Give room 421 (currently Publications) to the Systems Group.
2. Move Publications to room 221 (currently the Dover room)
3. Move the Dover and Xerox to room 225 which is air conditioned.
4. Move the terminals in room 225 to room 351 which is larger,
next to the third floor lounge where students can wait for
terminal access and consolidates our public areas.
Please send your comments to Yearwood@su-score and Oliger@su-navajo by
Wed 18 Jan.
Joe Oliger, Chairman of the Space Committee
-------
∂16-Jan-84 2135 ULLMAN@SU-SCORE.ARPA next meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Jan 84 21:35:09 PST
Date: Mon 16 Jan 84 21:34:26-PST
From: Jeffrey D. Ullman <ULLMAN@SU-SCORE.ARPA>
Subject: next meeting
To: CS440: ;
The meeting this Thursday will feature me, talking about
"Some Thoughts on Supercomputers."
I'll talk about sorting machines, their equivalence to most of the
"AI" problems that one hears about, their use in speeding up
parallel "combinatorial implosion," and some impossibility
results about fast sorting machines.
Remember that there is no meeting on the 26th because of the Forsythe
lectures.
-------
∂16-Jan-84 2244 LAWS@SRI-AI.ARPA AIList Digest V2 #7
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Jan 84 22:44:15 PST
Date: Mon 16 Jan 1984 21:55-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V2 #7
To: AIList@SRI-AI
AIList Digest Tuesday, 17 Jan 1984 Volume 2 : Issue 7
Today's Topics:
Production Systems - Requests,
Expert Systems - Software Debugging Aid,
Logic Programming - Prolog Textbooks & Disjunction Problem,
Alert - Fermat's Last Theorem Proven?,
Seminars - Multiprocessing Lisp & Lisp History,
Conferences - Logic Programming Discount & POPL'84,
Courses - PSU's First AI Course & Net AI Course
----------------------------------------------------------------------
Date: 11 Jan 1984 1151-PST
From: Jay <JAY@USC-ECLC>
Subject: Request for production systems
I would like pointers to free or public domain production systems
(running on Tops-20, Vax-Unix, or Vax-Vms) both interpreters (such as
ross) and systems built up on them (such as emycin). I am especially
interested in Rosie, Ross, Ops5, and Emycin. Please reply directly to
me.
j'
ARPA: jay@eclc
------------------------------
Date: Thu 12 Jan 84 12:13:20-MST
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Taxonomy of Production Systems
I'm looking for info on a formal taxonomy of production rule systems,
sufficiently precise that it can distinguish OPS5 from YAPS, but also say
that they're more similar than either of them is to Prolog. The only
relevant material I've seen is the paper by Davis & King in MI 8, which
characterizes PSs in terms of syntax, complexity of LHS and RHS, control
structure, and "programmability" (seems to mean meta-rules). This is
a start, but too vague to be implemented. A formal taxonomy should
indicate where "holes" exist, that is, strange designs that nobody has
built. Also, how would Georgeff's (Stanford STAN-CS-79-716) notion of
"controlled production systems" fit in? He showed that CPSs are more
general than PSs, but then one can also show that any CPS can be represented
by some ordinary PS. I'm particularly interested in formalization of
the different control strategies: are text order selection (as in Prolog)
and conflict resolution (as in OPS5) mutually exclusive, or can they be
intermixed (perhaps using text order to find 5 potential rules, then
conflict resolution to choose among the 5)? Presumably a sufficiently
precise taxonomy could answer these sorts of questions. Has anyone
looked at them?
stan shebs
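[Editorial sketch, not part of Shebs' message: the toy Prolog program below
is one way to make the intermixing question concrete, assuming an
Edinburgh-style Prolog that provides dynamic/1, findall/3, and assertz/1.
The rule base, the predicate names, and the "prefer the rule with the most
conditions" ordering are all invented for illustration; the only point is
that the selection strategy can be a parameter of the recognize-act cycle,
so text order and conflict resolution can be combined.]
:- dynamic fact/1.
/* Working memory: the initial facts. */
fact(wet).
fact(cold).
fact(biking).
/* rule(Name, Conditions, Conclusion), listed in text order. */
rule(r1, [wet, cold],        freezing).
rule(r2, [wet],              slippery).
rule(r3, [slippery, biking], danger).
/* A rule is applicable when all its conditions hold and its
   conclusion is not yet in working memory. */
all_hold([]).
all_hold([P|Ps]) :- fact(P), all_hold(Ps).
applicable(rule(N, Cs, C)) :-
        rule(N, Cs, C),
        \+ fact(C),
        all_hold(Cs).
/* Strategy 1 -- text order: commit to the first applicable rule. */
select_rule(text_order, R) :- applicable(R), !.
/* Strategy 2 -- conflict resolution over the whole conflict set,
   preferring the most specific rule (the one with most conditions). */
select_rule(conflict, R) :-
        findall(A, applicable(A), Set),
        most_specific(Set, R).
/* Strategy 3 -- intermixed: take the first K candidates in text
   order, then resolve the conflict among just those K. */
select_rule(mixed(K), R) :-
        findall(A, applicable(A), All),
        first_n(K, All, Set),
        most_specific(Set, R).
first_n(0, _, []) :- !.
first_n(_, [], []) :- !.
first_n(K, [X|Xs], [X|Ys]) :- K1 is K - 1, first_n(K1, Xs, Ys).
most_specific([R], R) :- !.
most_specific([rule(N1,C1,P1), rule(N2,C2,P2) | Rs], R) :-
        length(C1, L1), length(C2, L2),
        (  L1 >= L2
        -> most_specific([rule(N1,C1,P1) | Rs], R)
        ;  most_specific([rule(N2,C2,P2) | Rs], R)  ).
/* Recognize-act cycle, parameterized by the selection strategy. */
step(S) :- select_rule(S, rule(_, _, C)), assertz(fact(C)).
run(S)  :- ( step(S) -> run(S) ; true ).
[With this rule base "?- run(text_order)." always fires the first
applicable rule in source order, while "?- run(mixed(2))." gathers the
first two applicable rules in source order and then chooses between them
by specificity -- the kind of intermixing asked about above.]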
------------------------------
Date: 16 Jan 84 19:13:21 PST (Monday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Expert systems for software debugging?
Debugging is a black art, not at all algorithmic, but almost totally
heuristic. There is a lot of expert knowledge around about how to debug
faulty programs, but it is rarely written down or systematized. Usually
it seems to reside solely in the minds of a few "debugging whizzes".
Does anyone know of an expert system that assists in software debugging?
Or any attempts (now or in the past) to produce such an expert?
/Ron
------------------------------
Date: 12 Jan 84 20:43:31-PST (Thu)
From: harpo!floyd!clyde!akgua!sb1!mb2c!uofm-cv!lah @ Ucb-Vax
Subject: prolog reference
Article-I.D.: uofm-cv.457
Could anybody give some references to a good introductory book
on Prolog?
------------------------------
Date: 14 Jan 84 14:50:57-PST (Sat)
From: decvax!duke!mcnc!unc!bts @ Ucb-Vax
Subject: Re: prolog reference
Article-I.D.: unc.6594
There's only one introductory book I know of: Clocksin and
Mellish's "Programming in Prolog" (Springer-Verlag, 1981).
It's a silver paperback, probably still under $20.00.
For more information on the language, try Clark and Tarnlund's
"Logic Programming" (Academic Press, 1982). It's a white hardback
with an elephant on the cover. The papers by Bruynooghe and by
Mellish tell a lot about Prolog implementation.
Bruce Smith, UNC-Chapel Hill
decvax!duke!unc!bts (USENET)
bts.unc@CSnet-Relay (lesser NETworks)
------------------------------
Date: 13 Jan 84 8:11:49-PST (Fri)
From: hplabs!hao!seismo!philabs!sbcs!debray @ Ucb-Vax
Subject: re: trivial reasoning problem?
Article-I.D.: sbcs.572
Re: Marcel Schoppers' problem: given two lamps A and B, such that:
condition 1) at least one of them is on at any time; and
condition 2) if A is on then B is off,
we are to enumerate the possible configurations without an exhaustive
generate-and-test strategy.
The following "pure" Prolog program will generate the various
configurations without exhaustively generating all possible combinations:
config(A, B) :- cond1(A, B), cond2(A, B). /* both conditions must hold */
cond1(1, _). /* at least one is on at any time ... condition 1 above */
cond1(_, 1).
cond2(1, 0). /* if A is on then B is off */
cond2(0, _). /* if A is off, B's value is a don't care */
Executing this in Prolog gives:
| ?- config(A, B).
A = 1
B = 0 ;
A = 0
B = 1 ;
no
| ?- halt.
[ Prolog execution halted ]
Tracing the program shows that the configuration "A=0, B=0" is not generated.
This satisfies the "no-exhaustive-listing" criterion. Note that encoding
the second condition above using "not" would (1) no longer be pure Horn
Clause, and (2) would rely on exhaustive generation and filtering.
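[Editorial sketch, not in the original message: for contrast, a
generate-and-test encoding of the same problem, assuming a Prolog whose
negation-as-failure operator is written "\+" (older systems spell it
not/1). The predicate names are invented for illustration.]
lamp(0).
lamp(1).
config_gt(A, B) :-
        lamp(A), lamp(B),        /* generate all four combinations */
        ( A = 1 ; B = 1 ),       /* condition 1: at least one lamp on */
        \+ (A = 1, B = 1).       /* condition 2: not (A on and B on)  */
[Here lamp/1 builds every assignment, including A=0, B=0 and A=1, B=1,
which are then rejected by the tests -- exactly the exhaustive generation
and filtering that the clauses above avoid.]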
Saumya Debray
Dept. of Computer Science
SUNY at Stony Brook
Usenet: {floyd, bunker, cbosgd, mcvax, cmcl2}!philabs!sbcs!debray
        {allegra, teklabs, hp-pcd, metheus}!ogcvax!sbcs!debray
CSNet:  debray@suny-sbcs@CSNet-Relay
[Several other messages discussing this problem and suggesting Prolog
code were printed in the Prolog Digest. Different writers suggested
very different ways of structuring the problem. -- KIL]
------------------------------
Date: Fri 13 Jan 84 11:16:21-CST
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Fermat's Last Theorem Proven?
[Reprinted from the UTEXAS-20 bboard.]
There was a report last night on National Public Radio's All Things Considered
about a British mathematician named Arnold Arnold who claims to have
developed a new technique for dealing with multi-variable, high-dimensional
spaces. The method apparently makes generation of large prime numbers
very easy, and has applications in genetics, the many-body problem, orbital
mechanics, etc. Oh yeah, the proof to Fermat's Last Theorem falls out of
this as well! The guy apparently has no academic credentials, and refuses
to publish in the journals because he's interested in selling his technique.
There was another mathematician named Jeffrey Colby who had been allowed
to examine Arnold's work on the condition that he not disclose anything.
He claims the technique is all it's claimed to be, and that it shows what
can be done when somebody starts from pure ignorance, unclouded by the
preconceptions of a formal mathematical education.
If anybody hears more about this, please pass it along.
Clive
------------------------------
Date: 12 Jan 84 2350 PST
From: Rod Brooks <ROD@SU-AI>
Subject: Next week's CSD Colloquium.
[Reprinted from the SU-SCORE bboard.]
Dr. Richard P. Gabriel, Stanford CSD
``Queue-based Multi-processing Lisp''
4:30pm Terman Auditorium, Jan 17th.
As the need for high-speed computers increases, the need for
multi-processors will become more apparent. One of the major stumbling
blocks to the development of useful multi-processors has been the lack of
a good multi-processing language---one which is both powerful and
understandable to programmers.
Among the most compute-intensive programs are artificial intelligence (AI)
programs, and researchers hope that the potential degree of parallelism in
AI programs is higher than in many other applications. In this talk I
will propose a version of Lisp which is multi-processed. Unlike other
proposed multi-processing Lisps, this one will provide only a few very
powerful and intuitive primitives rather than a number of parallel
variants of familiar constructs.
The talk will introduce the language informally, and many examples along
with performance results will be shown.
------------------------------
Date: 13 January 1984 07:36 EST
From: Kent M Pitman <KMP @ MIT-MC>
Subject: What is Lisp today and how did it get that way?
[Reprinted from the MIT-MC bboard.]
Modern Day Lisp
Time: 3:00pm
Date: Wednesdays and Fridays, 18-27 January
Place: 8th Floor Playroom
The Lisp language has changed significantly in the past 5 years. Modern
Lisp dialects bear only a superficial resemblance to each other and to
their common parent dialects.
Why did these changes come about? Has progress been made? What have we
learned in 5 hectic years of rapid change? Where is Lisp going?
In a series of four lectures, we'll be surveying a number of the key
features that characterize modern day Lisps. The current plan is to touch
on at least the following topics:
Scoping. The move away from dynamic scoping.
Namespaces. Closures, Locales, Obarrays, Packages.
Objects. Actors, Capsules, Flavors, and Structures.
Signals. Errors and other unusual conditions.
Input/Output. From streams to window systems.
The discussions will be more philosophical than technical. We'll be
looking at several Lisp dialects, not just one. These lectures are not
just something for hackers. They're aimed at just about anyone who uses
Lisp and wants an enhanced appreciation of the issues that have shaped
its design and evolution.
As it stands now, I'll be giving all of these talks, though there
is some chance there will be some guest lecturers on selected
topics. If you have questions or suggestions about the topics to be
discussed, feel free to contact me about them.
Kent Pitman (KMP@MC)
NE43-826, x5953
------------------------------
Date: Wed 11 Jan 84 16:55:02-PST
From: PEREIRA@SRI-AI.ARPA
Subject: IEEE Logic Programming Symposium (update)
1984 International Symposium on
Logic Programming
Student Registration Rates
In our original symposium announcements, we failed to offer a student
registration rate. We would like to correct that situation now.
Officially enrolled students may attend the symposium for the reduced
rate of $75.00.
This rate includes the symposium itself (all three days) and one copy
of the symposium proceedings. It does not include the tutorial, the
banquet, or cocktail parties. It does, however, include the Casino
entertainment show.
Questions and requests for registration forms by US mail to:
Doug DeGroot                        Fernando Pereira
Program Chairman                    SRI International
IBM Research                or      333 Ravenswood Ave.
P.O. Box 218                        Menlo Park, CA 94025
Yorktown Heights, NY 10598          (415) 859-5494
(914) 945-3497
or by net mail to:
PEREIRA@SRI-AI (ARPANET)
...!ucbvax!PEREIRA@SRI-AI (UUCP)
------------------------------
Date: Tue 10 Jan 84 15:54:09-MST
From: Subra <Subrahmanyam@UTAH-20.ARPA>
Subject: *** P O P L 1984 --- Announcement ***
******************************* POPL 1984 *********************************
ELEVENTH ANNUAL
ACM SIGACT/SIGPLAN
SYMPOSIUM ON
PRINCIPLES OF
PROGRAMMING LANGUAGES
*** POPL 1984 will be held in Salt Lake City, Utah January 15-18. ****
(The skiing is excellent, and the technical program threatens to match it!)
For additional details, please contact
Prof. P. A. Subrahmanyam
Department of Computer Science
University of Utah
Salt Lake City, Utah 84112.
Phone: (801)-581-8224
ARPANET: Subrahmanyam@UTAH-20 (or Subra@UTAH-20)
------------------------------
Date: 12 Jan 84 4:51:51-PST (Thu)
From:
Subject: Re: PSU's First AI Course - Comment
Article-I.D.: sjuvax.108
I would rather NOT get into social issues of AI: there are millions of
forums for that (and I myself have all kinds of feelings and reservations
on the issue, including Vedantic interpretations), so let us keep this
one technical, please.
------------------------------
Date: 13 Jan 84 11:42:21-PST (Fri)
From:
Subject: Net AI course -- the communications channel
Article-I.D.: psuvax.413
Responses so far have strongly favored my creating a moderated newsgroup
as a sub to net.ai for this course. Most were along these lines:
From: ukc!srlm (S.R.L.Meira)
I think you should act as the moderator, otherwise there would be too
much noise - in the sense of unordered information and discussions -
and it could finish looking like just another AI newsgroup argument.
Anybody is of course free to post whatever they want if they feel
the thing is not coming out like they want.
Also, if the course leads to large volume, many net.ai readers (busy AI
professionals rather than students) might drop out of net.ai.
For a contrasting position:
From: cornell!nbires!stcvax!lat
I think the course should be kept as a newsgroup. I don't think
it will increase the nation-wide phone bills appreciably beyond
what already occurs due to net.politics, net.flame, net.religion
and net.jokes.
So HERE's how I'll try to keep EVERYBODY happy ... :-)
... a "three-level" communication channel. 1: a "free-for-all" via mail
(or possibly another newsgroup), 2: a moderated newsgroup sub to net.ai,
3: occasional abstracts, summaries, pointers posted to net.ai and AIList.
People can then choose the extent of their involvement and set their own
"bull-rejection threshold". (1) allows extensive involvement and flaming,
(2) would be the equivalent of attending a class, and (3) makes whatever
"good stuff" evolves from the course available to all others.
The only remaining question: should (1) be done via a newsgroup or mail?
Please send in your votes -- I'll make the final decision next week.
Now down to the REALLY BIG decisions: names. I suggest "net.ai.cse"
for level (2). The "cse" can EITHER mean "Computer Science Education"
or abbreviate "course". For level (1), how about "net.ai.ffa" for
"free-for-all", or .raw, or .disc, or .bull, or whatever.
Whatever I create gets zapped at the end of the course (June), unless by
then it has taken on a life of its own.
-- Bob
[PS to those NOT ON USENET: please mail me your address for private
mailings -- and indicate which of the three "participation levels"
best suits your tastes.]
Bob Giansiracusa (Dept of Computer Science, Penn State Univ, 814-865-9507)
UUCP: bobgian@psuvax.UUCP -or- allegra!psuvax!bobgian
Arpa: bobgian@PSUVAX1 -or- bobgian%psuvax1.bitnet@Berkeley
Bitnet: bobgian@PSUVAX1.BITNET CSnet: bobgian@penn-state.csnet
USnail: 333 Whitmore Lab, Penn State Univ, University Park, PA 16802
------------------------------
End of AIList Digest
********************
∂17-Jan-84 0800 PATASHNIK@SU-SCORE.ARPA CSD student meeting
Received: from SU-SCORE by SU-AI with TCP/SMTP; 17 Jan 84 08:00:10 PST
Date: Tue 17 Jan 84 07:54:20-PST
From: Oren Patashnik <PATASHNIK@SU-SCORE.ARPA>
Subject: CSD student meeting
To: students@SU-SCORE.ARPA
cc: faculty@SU-SCORE.ARPA, secretaries@SU-SCORE.ARPA, ras@SU-SCORE.ARPA
Final reminder---we are having a student meeting this week at noon on
Wednesday in 420-041 (basement of Psychology building). If you have
anything you'd like to have on the agenda, please let us know.
--Eric and Oren, bureaucrats
-------